
Over the past century, scientists have become adept at plotting the ecological interactions of the diverse organisms that populate the planet’s forests, plains and seas. They have established powerful mathematical techniques to describe systems ranging from the carbon cycles driven by plants to the predator-prey dynamics that dictate the behavior of lions and gazelles. Understanding the inner workings of microbial communities that can involve hundreds or thousands of microscopic species, however, poses a far greater challenge.


Microbes nourish each other and engage in chemical warfare; their behavior shifts with their spatial arrangements and with the identities of their neighbors; they function as populations of separate species but also as a cohesive whole that can at times resemble a single organism. Data collected from these communities reveal incredible diversity but also hint at an underlying, unifying structure.

Scientists want to tease out what that structure might be—not least because they hope one day to be able to manipulate it. Microbial communities help to define ecosystems of all shapes and sizes: in oceans and soil, in plants and animals. Some health conditions correlate with the balance of microbes in a person’s gut, and for a few conditions, such as Crohn’s disease, there are known causal links to onset and severity. Controlling the balance of microbes in different settings might provide new ways to treat or prevent various illnesses, improve crop productivity or make biofuels.

But to reach that level of control, scientists first have to work out all the ways in which the members of any microbial community interact—a challenge that can become incredibly complicated. In a paper published in Nature Communications last month, a team of researchers led by Yang-Yu Liu, a statistical physicist at Harvard Medical School, presented an approach that gets around some of the formidable obstacles and could enable scientists to analyze a lot of data they haven’t been able to work with.

The paper joins a growing body of work seeking to make sense of how microbes interact, and to illuminate one of the field’s biggest unknowns: whether the main drivers of change in a microbial community are the microbes themselves or the environment around them.

Gleaning More From Snapshots

“We understand so little about the mechanisms underlying how microbes interact with each other,” said Joao Xavier, a computational biologist at Memorial Sloan Kettering Cancer Center, “so trying to understand this problem using methods that come from data analysis is really important at this stage.”

But current strategies for gaining such insights cannot make use of a wealth of data that have already been collected. Existing approaches require time-series data: measurements taken repeatedly from the same hosts or communities over long stretches of time. Starting with an established model of population dynamics, scientists can use those measurements to test assumptions about how certain species affect others over time, and then adjust the model to fit the data.
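To make that concrete, here is a minimal sketch of the kind of model such approaches typically fit. The generalized Lotka-Volterra form is a common choice in this literature, but the choice of model, the two-species setup, and every number below are assumptions for illustration, not details from Liu's paper.

```python
# Minimal sketch of time-series-based inference, assuming a generalized
# Lotka-Volterra (gLV) model with invented parameters for two species.
import numpy as np

def glv_step(x, r, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (r_i + sum_j A_ij * x_j)."""
    return x + dt * x * (r + A @ x)

r = np.array([1.0, 0.6])            # intrinsic growth rates (made up)
A = np.array([[-1.0, -0.4],         # A[i, j] = effect of species j on species i
              [ 0.3, -1.0]])        # here species 1 helps 2, species 2 hurts 1

x = np.array([0.1, 0.1])            # starting abundances
trajectory = [x.copy()]
for _ in range(2000):               # simulate the kind of time series you'd measure
    x = glv_step(x, r, A)
    trajectory.append(x.copy())

print("final abundances:", np.round(trajectory[-1], 3))
# In practice the problem runs the other way: given measured trajectories,
# researchers adjust r and A until the model reproduces the data, then read
# the interactions off the signs and sizes of the entries of A.
```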

Such time-series data are difficult to obtain, and a lot is needed to get results. Moreover, the samples are not always informative enough to yield reliable inferences, particularly in relatively stable microbial communities. Scientists can get more informative data by adding or removing microbial species to perturb the systems—but doing so poses ethical and practical issues, for example, when studying the gut microbiota of people. And if the underlying model for a system isn’t a good fit, the subsequent analysis can go very far astray.

Because gathering and working with time-series data are so difficult, most measurements of microbes—including the information collected by the Human Microbiome Project, which characterized the microbial communities of hundreds of individuals—tend to fall into a different category: cross-sectional data. Those measurements are snapshots of many separate populations of microbes, each taken at a single point in time, rather than a chronology of changes within one community. The trade-off is that although cross-sectional data are much more readily available, inferring interactions from them has been difficult. The networks of modeled behaviors they yield are based on correlations rather than direct effects, which limits their usefulness.

Imagine two types of microbes, A and B: When the abundance of A is high, the abundance of B is low. That negative correlation doesn’t necessarily mean that A is directly detrimental to B. It could be that A and B thrive under opposite environmental conditions, or that a third microbe, C, is responsible for the observed effects on their populations.
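The confounding problem is easy to reproduce in a toy simulation; the hidden environmental factor and all the numbers below are invented for illustration.

```python
# Two microbes that never interact can still look strongly (anti)correlated
# across cross-sectional samples if a hidden factor drives both.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 500
env = rng.normal(size=n_samples)        # unmeasured factor (pH, diet, oxygen...)

# A does well when env is high, B does well when env is low; neither
# species affects the other directly.
abundance_a = 10 + 3 * env + rng.normal(scale=1.0, size=n_samples)
abundance_b = 10 - 3 * env + rng.normal(scale=1.0, size=n_samples)

corr = np.corrcoef(abundance_a, abundance_b)[0, 1]
print(f"correlation between A and B: {corr:.2f}")   # close to -0.9
# A correlation network would draw a strong negative edge between A and B
# even though the true direct interaction is zero.
```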

But now, Liu and his colleagues claim that cross-sectional data can say something about direct ecological interactions after all. “A method that doesn’t need time-series data would create a lot of possibilities,” Xavier said. “If such a method works, it would open up a bunch of data that’s already out there.”

A Simpler Framework

Liu’s team sifts through those mountains of data by taking a simpler, more fundamental approach: Rather than getting caught up in measuring the specific, finely calibrated effects of one microbial species on another, Liu and his colleagues characterize those interactions with broad, qualitative labels. The researchers simply infer whether the interactions between two species are positive (species A promotes the growth of species B), negative (A inhibits the growth of B) or neutral. They determine those relationships in both directions for every pair of species found in the community.

Liu’s work builds on prior research that used cross-sectional data from communities that differ by only a single species. For instance, if species A grows alone until it reaches an equilibrium, and then B is introduced, it is easy to observe whether B is beneficial, harmful or unrelated to A.
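Here is a minimal sketch of that leave-one-out logic: compare a species' equilibrium abundance with and without a would-be partner, and label the interaction by the sign of the change. The numbers and the tolerance are invented, and this is only the simple base case; Liu's method is designed to work when samples differ by many species at once.

```python
# Toy sign inference from two equilibrium samples that differ by one species.

def interaction_sign(abundance_alone, abundance_with_partner, tol=0.05):
    """+1 if the partner promotes growth, -1 if it inhibits, 0 if neutral."""
    change = (abundance_with_partner - abundance_alone) / abundance_alone
    if change > tol:
        return 1
    if change < -tol:
        return -1
    return 0

a_alone = 0.80      # equilibrium abundance of A grown by itself (made up)
a_with_b = 0.55     # equilibrium abundance of A after B is introduced (made up)

print("effect of B on A:", interaction_sign(a_alone, a_with_b))   # prints -1
```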

The great advantage of Liu’s technique is that it allows relevant samples to differ by more than one species, heading off what would otherwise be an explosion in the number of samples needed. In fact, according to his study’s findings, the number of required samples scales linearly with the number of microbial species in the system. (By comparison, with some popular modeling-based approaches, the number of samples needed increases with the square of the number of species in the system.) “I consider this really encouraging for when we talk about the network reconstruction of very large, complex ecosystems,” Liu said. “If we collect enough samples, we can map the ecological network of something like the human gut microbiota.”

Those samples allow scientists to constrain the combination of signs (positive, negative, zero) that broadly define the interactions between any two microbial strains in the network. Without such constraints, the possible combinations are astronomical: “If you have 170 species, there are more possibilities than there are atoms in the visible universe,” said Stefano Allesina, an ecologist at the University of Chicago. “The typical human microbiome has more than 10,000 species.” Liu’s work represents “an algorithm that, instead of exhaustively searching among all possibilities, pre-computes the most informative ones and proceeds in a much quicker way,” Allesina said.
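A rough check of the arithmetic: one way to read that count is as the number of possible sign assignments for 170 pairwise interactions (that reading, and the atom estimate, are the usual back-of-the-envelope figures, not numbers from the paper):

$$3^{170} \approx 1.7 \times 10^{81} \qquad \text{vs.} \qquad \sim 10^{80} \text{ atoms in the observable universe.}$$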

Perhaps most important, with Liu’s method, researchers don’t need to presuppose a model of what the interactions among microbes might be. “Those decisions can often be quite subjective and open to conjecture,” said Karna Gowda, a postdoctoral fellow studying complex systems at the University of Illinois, Urbana-Champaign. “The strength of this study [is that] it gets information out of the data without resorting to any particular model.”

Instead, scientists can use the method to verify when a certain community’s interactions follow the equations of classical population dynamics. In those cases, the technique allows them to infer the information their usual methods sacrifice: the specific strengths of those interactions and the growth rates of species. “We can get the real number, not just the sign pattern,” Liu said.

In tests, when given data from microbial communities of eight species, Liu’s technique generated networks of inferred interactions that included 78 percent of those that Jonathan Friedman, a systems biologist at the Hebrew University of Jerusalem and one of Liu’s co-authors, had identified in a previous experiment. “It was better than I expected,” Friedman said. “The mistakes it made were when the real interactions I had measured were weak.”

Liu hopes to eventually use the method to make inferences about communities like those in the human microbiome. For example, he and some of his colleagues posted a preprint on biorxiv.org in June that detailed how one could identify the minimum number of “driver species” needed to push a community toward a desired microbial composition.

A Greater Question

Realistically, Liu’s goal of fine-tuning microbiomes lies far in the future. Aside from the technical difficulties of getting enough of the right data for Liu’s approach to work, some scientists have more fundamental conceptual reservations—ones that tap into a much larger question: Are changes in the composition of a microbial community mainly due to the interactions between the microbes themselves, or to the perturbations in their environment?

Some scientists think it’s impossible to gain valuable information without taking environmental factors into account, which Liu’s method does not. “I’m a bit skeptical,” said Pankaj Mehta, a biophysicist at Boston University. He is doubtful because the method assumes that the relationship between two microbial strains does not change as their shared environment does. If that’s indeed the case, Mehta said, then the method would be applicable. “It would be really exciting if what they’re saying is true,” he said. But he questions whether such cases will be widespread, pointing out that microbes might compete under one set of conditions but help each other in a different environment. And they constantly modify their own surroundings by means of their metabolic pathways, he added. “I’m not sure how you can talk about microbial interactions independent of their environment.”

A more sweeping criticism was raised by Alvaro Sanchez, an ecologist at Yale University who has collaborated with Mehta on mechanistic, resource-based models. He emphasized that the environment overwhelmingly determines the composition of microbial communities. In one experiment, he and his colleagues began with 96 completely different communities. When all were exposed to the same environment, Sanchez said, over time they tended to converge on having the same families of microbes in roughly the same proportions, even though the abundance of each species within the families varied greatly from sample to sample. And when the researchers began with a dozen identical communities, they found that changing the availability of even one sugar as a resource created entirely divergent populations. “The new composition was defined by the carbon [sugar] source,” Sanchez said.

The effects of the microbes’ interactions were drowned out by the environmental influences. “The structure of the community is determined not by what’s there but by the resources that are put in … and what [the microbes] themselves produce,” Mehta said.

That’s why he’s unsure how well Liu’s work will translate into studies of microbiomes outside the laboratory. Any cross-sectional data taken for the human microbiome, he said, would be influenced by the subjects’ different diets.

Liu, however, says this wouldn’t necessarily be the case. In a study published in Nature in 2016, he and his team found that human gut and mouth microbiomes exhibit universal dynamics. “It was a surprising result,” he said, “to have strong evidence of healthy individuals having a similar universal ecological network, despite different diet patterns and lifestyles.”

His new method may help bring researchers closer to unpacking the processes that shape the microbiome—and to learning how much those processes depend on the species’ relationships rather than the environment.


Researchers in both camps can also work together to provide new insights into microbial communities. The network approach taken by Liu and others, and the more detailed metabolic understanding of microbial interactions, “represent different scales,” said Daniel Segrè, a professor of bioinformatics at Boston University. “It’s essential to see how those scales relate to each other.” Although Segrè himself focuses on molecular, metabolism-based mappings, he finds value in gaining an understanding of more global information. “It’s like, if you know a factory is producing cars, then you also know it has to produce engines and wheels in certain fixed proportions,” he said.

Such a collaboration could have practical applications, too. Xavier and his colleagues have found that the microbiome diversity of cancer patients is a huge predictor of their survival after a bone marrow transplant. The medical treatments that precede transplant—acute chemotherapy, prophylactic antibiotics, irradiation—can leave patients with microbiomes in which one microbe overwhelmingly dominates the composition. Such low diversity is often a predictor of low patient survival: According to Xavier, his colleagues at Sloan Kettering have found that the lowest microbial diversity can leave patients with five times the mortality rate seen in patients with high diversity.

Xavier wants to understand the ecological basis for that loss of microbial diversity, in the hopes of designing preventive measures to maintain the needed variability or interventions to reconstitute it. But to do that, he also needs the information Liu’s method provides about microbial interactions. For example, if a patient takes a narrow-spectrum antibiotic, might that affect a broader spectrum of microbes because of ecological dependencies among them? Knowing how an antibiotic’s effects could propagate throughout a microbial network could help physicians determine whether the drug could cause a huge loss to a patient’s microbiome diversity.

“So both the extrinsic perturbation and the intrinsic properties of the system are important to know,” Xavier said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


San Francisco Mayor Ed Lee died in December of 2017; the election to replace him was Tuesday. No one knows who won. Partially that’s because the votes are still trickling in. Mail-in ballots merely had to be postmarked by election day, and as I write the city is reporting 87,000 votes yet to be processed. But that’s not the only roadblock. The other problem is math.

See, the San Francisco mayoral election isn’t just another whoever-gets-the-most-votes-wins sort of deal. No, this race was another example of the kind of cultural innovation that California occasionally looses upon an unsuspecting America, like smartphones and fancy toast. Surprise, you guys! We don’t even vote like y’all out here.

The way it worked is called ranked choice voting, also known as an instant runoff. Voters rank three choices in order of preference. The counting process drops the person with the fewest first-choice votes, reallocates that candidate’s votes to all his or her voters’ second choices, and then repeats. Does this sound insane? Actually, it’s genius. It is also insane.
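Here is a minimal sketch of that counting loop. The ballots are invented, not real San Francisco returns, and real elections add tie-breaking rules and handling for exhausted ballots that this toy version skips.

```python
# Toy instant-runoff count: repeatedly drop the candidate with the fewest
# first-choice votes and transfer those ballots to their next surviving choice.
from collections import Counter

def instant_runoff(ballots):
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter()
        for ballot in ballots:                 # count each ballot for its
            for choice in ballot:              # highest-ranked surviving choice
                if choice in remaining:
                    tallies[choice] += 1
                    break
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()) or len(remaining) == 1:
            return leader
        remaining.remove(min(tallies, key=tallies.get))   # eliminate last place

ballots = ([("Breed", "Leno", "Kim")] * 5 +
           [("Leno", "Kim", "Breed")] * 4 +
           [("Kim", "Leno", "Breed")] * 3)
print(instant_runoff(ballots))   # Kim is dropped, her ballots flow to Leno, Leno wins
```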

The mayoral ballot had eight candidates, including unlikely winners like a lawyer who’d run three times before, a holistic health practitioner, and a Republican. San Franciscans coalesced around three: London Breed, Jane Kim, and Mark Leno, all local elected officials with the kinds of intertwined histories that you could only get from two-fisted municipal politics in a region with astronomical amounts of tech money (mostly out of government reach thanks to sweetheart corporate tax deals and a history of failing to tax homeowners on the real value of their property). Breed has the most first-place votes so far—10 percentage points up on Leno, in second—but the second-choice votes reallocated from Kim’s supporters have put Leno ahead by a margin so narrow it’d disappear if you looked at it end-on.

What’s the point of complexifying a straightforward election? The thing is, elections aren’t straightforward. Social choice theory lays out a bunch of different ways a group might make a decision, and “plurality”—whoever gets the most votes wins—is just one. It works great if you have a ballot with only two choices on it. But add more choices, and you have problems.


When Reform Party candidate Jesse Ventura defeated the Republican Norm Coleman and the Democrat Skip Humphrey for governor of Minnesota in 1998, political pundits saw voter disgust with The System at work. Ventura got 37 percent of the vote; Coleman, 35; and Humphrey, 28. But as Emory mathematician Victoria Powers wrote in a 2015 paper, exit polls said that almost everyone who voted for Coleman had Humphrey as a second choice, and Coleman was the second choice of almost everyone who voted for Humphrey. “The voters preferred Coleman to both of the other candidates, and yet he lost the election,” Powers wrote.

That’s plurality. The same problems come up with “antiplurality,” in which everyone says who they hate, and the person with the fewest votes wins. Both can fail the Condorcet criterion—named for the philosopher-mathematician the Marquis de Condorcet, who argued in 1785 that an election should be won by a candidate who’d beat all the other candidates head-to-head. (Sequential pairwise voting, in which you eliminate the losers March Madness style, seems to give you a clear winner … but that winner can change depending on the order you run the matchups.)
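Here's a quick sketch of that head-to-head check, with ballot counts invented to echo the Minnesota story (the real exit-poll numbers aren't in the article): Ventura leads on first choices, but Coleman is nearly everyone's fallback.

```python
# Toy Condorcet check: a Condorcet winner beats every rival head-to-head.
ballots = {                                  # ranking -> number of voters (invented)
    ("Ventura", "Coleman", "Humphrey"): 37,
    ("Coleman", "Humphrey", "Ventura"): 35,
    ("Humphrey", "Coleman", "Ventura"): 28,
}
total = sum(ballots.values())
candidates = {c for ranking in ballots for c in ranking}

def beats(a, b):
    """True if a majority of ballots rank candidate a above candidate b."""
    a_over_b = sum(n for ranking, n in ballots.items()
                   if ranking.index(a) < ranking.index(b))
    return 2 * a_over_b > total

for c in candidates:
    if all(beats(c, rival) for rival in candidates - {c}):
        print("Condorcet winner:", c)   # Coleman, despite trailing on first choices

# Plurality still picks Ventura (37 first-choice votes), which is the
# mismatch Powers pointed out.
```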

So, yeah, plurality: bad. “It’s very restrictive on voters,” says Daniel Ullman, a mathematician at George Washington University and the co-author of The Mathematics of Politics. “If you allow voters to say who their top two candidates are, or rank all 10 in order, or give approval to those they like or don’t like, or all sorts of other ballots, then things get interesting.”

They do indeed. The other systems let voters express more choice, but they also introduce what mathematicians call paradoxes. Here’s an example: ranked choice voting lacks “monotonicity.” That is to say, people sometimes have to vote against the candidate they’re actually supporting to make a win more likely. “That’s disturbing, because when you go into the ballot box you’re not sure if you should reveal what your true wish is,” Ullman says.

And indeed, some of the campaigning leading up to election day involved telling people which two candidates to vote for, regardless of order—basically, please vote against the other corner of the triangle. Flip side, imagine how different American history might be if the 2000 presidential election (Al Gore virtually tied with George W. Bush, Ralph Nader and Pat Buchanan as spoilers) had been ranked choice.

Ranked-choice and sequential pairwise aren’t even the weirdest possibilities out there. You could assign everyone a score, with some points for top choice, fewer for second, fewer for third, etc. Whoever has the most points at the end wins. That’s a “Borda Count.” Fun problem: In the same election with the same vote counts, plurality, antiplurality, and Borda count could all yield different winners. And Borda violates Condorcet, too. Yiiiiikes.
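To see how, here is a toy 13-voter profile, invented for illustration, in which plurality, Borda, and antiplurality each crown a different candidate.

```python
# Same ballots, three positional scoring rules, three different winners.
ballots = {                     # ranking -> number of voters (invented)
    ("A", "B", "C"): 2,
    ("A", "C", "B"): 4,
    ("B", "A", "C"): 1,
    ("B", "C", "A"): 4,
    ("C", "B", "A"): 2,
}

def positional_winner(weights):
    """Give weights[k] points for a ballot's k-th place; highest total wins."""
    scores = {}
    for ranking, n in ballots.items():
        for place, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + n * weights[place]
    return max(scores, key=scores.get), scores

for name, weights in [("plurality", (1, 0, 0)),
                      ("Borda count", (2, 1, 0)),
                      ("antiplurality", (1, 1, 0))]:
    winner, scores = positional_winner(weights)
    print(f"{name:13} winner: {winner}  scores: {scores}")
# plurality -> A, Borda count -> B, antiplurality -> C
```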

“There was a meeting of voting system experts a number of years ago, and they voted on which method they liked best. Apparently the plurality method got zero votes,” Ullman says. “One of the favorites was approval, where your ballot is a yes-no choice for each candidate, and whoever gets the most yeses wins.”

Yes, I asked how they chose. “They actually used approval voting,” Ullman says.

So do a lot of professional societies, including mathematicians. You might think that this would yield only the most anodyne, least objectionable choice, but you actually get winners—Condorcet winners!—with broad support. (Engineers don’t like it as much; the Institute of Electrical and Electronics Engineers abandoned the practice.) You can even go harder and hybridize various options, or add ranking to simple yes/no approvals. One downside might be that voters have to have an opinion about everyone on the ballot. “If someone said, ‘you have to submit a preference ballot and you have to rank all 20,’ there’d be a lot of people who would know their first and second choice, and maybe their third, but then say, ‘I’ve never heard of the rest of these people.’”

The mystery of who the next mayor of San Francisco will be wasn’t even primary day’s only drama. Instead of splitting up other races by party, in California everyone goes onto the same ballot, and the top two vote-getters advance to the general election in November. If they’re from the same party? So be it. Except this year Democratic enthusiasm was so high because of, like, everything, that the slates in those races filled with rarin’-to-go Dems representing every wavelength of blue from desaturated purple to deep indigo. That freaked out the national party, which worried that everyone would peel votes from everyone else, locking Democrats out of both slots in four districts when the party is hoping to take control of the House of Representatives. They didn’t get locked out, but the attempt at electoral innovation came from the same spirit.

California has long been willing to perform surgery on democracy to correct flaws both cosmetic and life-threatening. Gilded Age California politics was so corrupt that progressive reformers instituted the initiative process, for example, letting anyone with enough signatures put legislation on a ballot. The top-two primary, also used in Washington and Nebraska, comes in part as a tool in the fight against gerrymandering. Like a lot of Californian ideals, the voting system is a little crazy-making and a little noble all at once.

It’s also doomed. In the 1950s the economist Kenneth Arrow set out to find the one best voting method, one election to rule them all. He ended up proving that there wasn’t one. Arrow’s Impossibility Theorem, part of the work for which he won the Nobel Prize in 1972, says that once a ballot has more than two choices, no ranked voting method can satisfy every reasonable criterion for making a rock-solid social choice.

But that’s democracy for you. We’re not here to make the union perfect—just more perfect.

On July 1, 2013, Amos Joseph Wells III went to his pregnant girlfriend's home in Fort Worth, Texas, and shot her multiple times in the head and stomach. He then killed her mother and her 10-year-old brother. Wells surrendered voluntarily within hours, and in a tearful jailhouse interview told reporters, "There's no explanation that I could give anyone, or anybody could give anyone, to try to make it seem right, or make it seem rational, to make everybody understand."

Heinous crimes tend to defy comprehension, but some researchers believe neuroscience and genetics could help explain why certain people commit such atrocities. Meanwhile, lawyers are introducing so-called neurobiological evidence into court more than ever.

Take Wells, for instance. His lawyers called on Pietro Pietrini—director of the IMT School for Advanced Studies in Lucca, Italy, and an expert on the neurobiological correlates of antisocial behavior—to testify at their client's trial last year. “Wells had several abnormalities in the frontal regions of his brain, plus a very bad genetic profile," Pietrini says. Scans of the defendant's brain showed abnormally low neuronal activity in his frontal lobe, a condition associated with increased risk of reactive, aggressive, and violent behavior. In Pietrini's estimation, that "bad genetic profile" consisted of low MAOA gene activity—a trait long associated with aggression in people raised in abusive environments—and five other notable genetic variations. To differing degrees, they're linked with a susceptibility to violent behavior, impulsivity, risk-taking, and impaired decision-making.

"What we tried to sustain was that he had some evidence of a neurobiological impairment that would affect his brain function, decision making, and impulse control," Pietrini says. "And this, we hoped, would spare him from the death penalty."

It did not. On November 3, 2016, a Tarrant County jury found Wells guilty of capital murder. Two weeks later, the same jury deliberated Wells' fate for just four hours before sentencing him to die. The decision, as mandated by Texas law, was unanimous.

In front of a different judge or another jury, Wells might have avoided the death penalty. In 2010, lawyers used a brain-mapping technology called quantitative electroencephalography to try to convince a Dade City, Florida, jury that defendant Grady Nelson was predisposed to impulsiveness and violence when he stabbed his wife 61 times before raping and stabbing her 11-year-old daughter. The evidence's sway over at least two jurors locked the jury in a 6-6 split over whether Nelson should be executed, resulting in a recommendation of life without parole.

Nelson's was one of nearly 1,600 court cases examined in a recent analysis of neurobiological evidence in the US criminal justice system. The study, by Duke University bioethicist Nita Farahany, found that the number of judicial opinions mentioning neuroscience or behavioral genetics more than doubled between 2005 and 2012, and that roughly 25 percent of death penalty trials employ neurobiological data in pursuit of a lighter sentence.

Farahany's findings also suggest defense attorneys are applying neuroscientific findings to more than capital murder cases; lawyers are increasingly introducing neuroscientific evidence in cases ranging from burglary and robbery to kidnapping and rape.

"Neuro cases without a doubt are increasing, and they're likely to continue increasing over time" says Farahany, who adds that people appear to be particularly enamored of brain-based explanations. "It’s a much simpler sell to jurors. They seem to believe that it’s much more individualized than population genetics. Also, they can see it, right? You can show somebody a brain scan and say: There. See that? That big thing, in this person’s brain? You don’t have that. I don’t have that. And it affects how this person behaves.”

And courts seem to be buying it. Farahany found that between 20 and 30 percent of defendants who invoke neuroscientific evidence get some kind of break on appeal—a higher success rate than one sees in criminal appeals, in general. (A 2010 analysis of nearly 70,000 US criminal appeals found that only about 12 percent of cases wound up being reversed, remanded, or modified.) At least in the instances Farahany investigated (a small sample, she notes, of criminal cases, 90 percent of which never go to trial), neurobiological evidence seemed to have a small but positive impact on defendants' outcomes.

The looming question—scientifically, legally, philosophically—is whether it should.

Many scientists and legal experts question whether neurobiological evidence belongs in court in the first place. "Most of the time, the science isn’t strong enough," says Stephen Morse, professor of law and psychiatry at the University of Pennsylvania.

Morse calls this the "clear cut" problem: Where the defendant's mental and behavioral state are obvious, you don’t need neurobiological evidence to support it. But in cases where the behavioral evidence is unclear, the brain data or genetic data aren't exact enough to serve as diagnostic markers. "So where we need the help most—where it’s a gray area case, and we’re simply not sure whether the behavioral impairment is sufficient—the scientific data can help us least," says Morse. "Maybe this will change over time, but that’s where we are now.”

You don't have to look hard to see his point. To date, no brain abnormality or genetic variation has been shown to have a deterministic effect on a person's behavior, and it's reasonable to assume that one never will. Medicine, after all, is not physics; your neurobiological state cannot predict that you will engage in violent, criminal, or otherwise antisocial activity, as any researcher will tell you.

But some scientific arguments appear to be more persuasive than others. Brain scans, for example, seem to hold greater sway over the legal system than behavioral genetic analyses. "Most of the evidence right now suggests that genetic evidence, alone, isn’t having much influence on judges and juries," says Columbia psychiatrist Paul Appelbaum, co-author of a recent review, published in Nature Human Behaviour, that examines the use of such evidence in criminal court. Juries, he says, might not understand the technical intricacies of genetic evidence. Or juries may simply believe genetic predispositions are irrelevant in determining someone's guilt or punishment.

Still another explanation could be what legal researchers call the double-edged sword phenomenon. "The genetic evidence might indicate a reduced degree of responsibility for my behavior, because I have a genetic variant that you don’t, but at the same time suggest that I'm more dangerous than you are. That if I really can't control my behavior, maybe I'm exactly the kind of person who should be locked up for a longer period of time," Appelbaum says. Whatever the reason for genetic evidence's weak impact, Appelbaum predicts its use in court—absent complementary neurological evidence—will decrease.

That's not necessarily a bad thing. There's considerable disagreement within the scientific community over the influence of so-called gene-environment interactions on human behavior, including ones believed to affect people like Amos Wells.

In their 2014 meta-analysis of the two most commonly studied genetic variants linked to aggression and antisocial behavior (both of which Wells possesses), Emory University psychologists Courtney Ficks and Irwin Waldman concluded that the variants appear to play a "modest" role in antisocial behavior. But they also identified numerous examples of studies bedeviled by methodological and interpretive flaws, susceptibility to error, loose standards for replication, and evidence of publication bias. "Notwithstanding the excitement that many researchers have felt at the prospect of [gene-environment] interactions in the development of complex traits, there is growing evidence that we must be wary of these findings," the researchers wrote.

So then. What should a jury consider in the case of someone like Amos Wells? In his expert report, Pietrini cited Ficks and Waldman's analysis—and more than 80 other papers—to emphasize the modest role of genetic variation in antisocial behavior. And in its cross-examination, the prosecution went through several of Pietrini's citations line by line, calling for circumspection: it pointed to the Ficks paper, for instance, and quoted excerpts that cast behavioral genetics findings in an uncertain light. Lines like this one, from a 2003 paper in Nature about the association of gene variants with anger-related traits: "Nevertheless, our findings warrant further replication to avoid any spurious associations for the example due to the ethnic stratification effects and sampling errors."

Pietrini chuckles when I recount the prosecution's criticisms. "You look at the discussion section of any medical study, and you'll find sentences like that: Needs more research. Needs a larger sample size. Needs to be replicated. Warrants caution. But it doesn't mean that what's been observed is wrong. It means that, as scientists, we're always cautious. Medical science is only ever proven true by history, but Amos Wells, from my point of view, had many genetic and neurological factors that impaired his mental ability. I say that not because I was a consultant to the defense, but in absolute terms."


Pietrini's point gets to the heart of a question still tackled by researchers and legal scholars: When do scientific findings become worthy of legal consideration?

The general assumption is that the same standards that guide the scientific community should guide the law, says Drexel University law professor Adam Benforado, author of Unfair: The New Science of Criminal Injustice. "But I think that probably shouldn't be the case," he says. "I think when someone is facing the death penalty, they ought to have a right to present neuroscientific or genetic research findings that may not be entirely settled but are sound enough to be published in peer reviewed literature. Because at the end of the day, when someone's life is at stake, to wait for things to be absolutely settled is dangerous. The consequences of inaction are too grave."

That's basically the Supreme Court's stance, too. In the US, the bar for admissibility of mitigating evidence in death penalty proceedings is very low, owing to the Supreme Court's 1978 ruling in Lockett v. Ohio. "Essentially, the kitchen sink comes in. And in very few death penalty proceedings will the judge make a searching inquiry into relevance," says Morse, who begrudgingly agrees that neurobiological evidence should be admissible in capital cases, because so much is at stake. "I'd rather it wasn't, because I think it debases the legal process," he says, adding that most neuroscientific and genetic evidence introduced at capital proceedings has more rhetorical relevance than legal relevance.

"What they’re doing is making what I call the fundamental psycho-legal error. This is the belief that once you have found a partially causal explanation for a behavior, then the behavior must be excused altogether. All behavior has causes, including causes at the biological, psychological, and sociological level. But causation is not an excusing condition." If it were, Morse says, no one would be responsible for any behavior.

But that is not the world we live in. Today, in most cases, the law holds people responsible for their actions, not their predispositions. As Wells told his relatives in the courtroom after his sentence was handed down: "I did this. I'm an adult. Don't bear this burden. This burden is mine."



Can an Airplane Take Off on a Moving Runway?


This question is probably as old as the airplane itself. It goes something like this: A plane sits on a giant conveyer belt that moves backward exactly as fast as the plane moves forward. Can the plane take off?

The first question a reasonable person would ask is "Where do you get a giant plane-sized treadmill that goes 100 mph?" Yes, that is indeed a good question—but I won't answer it. Instead, I'm going to give this question the best physics answer I can.

Before I do that, I should point out that others have also answered this question (not surprising since it's super old anyway). First, there is the MythBusters episode from 2008. Actually, they didn't answer the question—they did the question. The MythBusters made a giant conveyer belt with a plane on it. It was awesome. Second, there is the xkcd answer to this question (also from 2008).

Now you get my answer. I will answer with different examples.

A Car on a Conveyer Belt

This isn't so difficult. What if I put a car going 100 mph on a conveyer belt that is also going 100 mph in the opposite direction? It would look something like this:

Really, there is probably no surprise here. The car's wheels would roll at 100 mph as the treadmill (or conveyer belt) moves back at 100 mph so that the car remains stationary. Actually, here is a slightly cooler example (with the same physics).

Here is an experiment (also from the MythBusters) in which they shot a ball at 60 mph out the back of a truck also going 60 mph. You can see that the ball remains stationary (with respect to the ground).

Super Short Takeoff

Here is a plane from Alaska that takes off in a very short distance.

How does this work? I'll give you a hint—there is a very strong wind blowing into the front of the plane. Without a headwind, this wouldn't happen. But if you think about it, this short take off is very much like the car on the treadmill. For a plane, it doesn't drive on the ground, it "drives" in the air. If the plane has a takeoff speed of 40 mph and is in a 40 mph headwind, it doesn't even need to move at all with respect to the ground.
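In terms of speeds, the wings only care about airspeed, which is ground speed plus headwind; with the numbers from this example,

$$v_{\text{air}} = v_{\text{ground}} + v_{\text{wind}} \quad\Rightarrow\quad v_{\text{ground}} = 40~\text{mph} - 40~\text{mph} = 0.$$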

Plane on a Conveyer Belt

Now let's do it. Here is a short clip from the MythBusters launching a plane on a moving treadmill.

Yes, it takes off. How can a plane take off from a runway moving in the opposite direction? It's because the wheels on a plane don't really do anything. The only function of the wheels is to provide low friction between the aircraft and the ground. They don't even push the plane forward—that is done by the propeller. The only difference when launching a plane on a moving runway is that the wheels will spin at twice the normal speed—but that shouldn't matter.
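The factor of two is just relative motion: the wheels roll at the plane's speed relative to the belt surface, so with the plane moving forward at v and the belt moving backward at v,

$$v_{\text{wheels}} = v_{\text{plane}} - v_{\text{belt}} = v - (-v) = 2v.$$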

So the plane on a treadmill works, but how about a case where the plane wouldn't take off? What if the plane was more like a glider with motorized wheels? On a normal runway, these motorized wheels would increase the speed of the glider until it reached takeoff speed. But if you put this on a moving runway, the wheels would spin at the right speed and cancel the motion of the treadmill so that the plane would remain motionless and never reach the proper speed for a launch.

OK, so that is the answer to everyone's favorite question. But don't worry, this answer won't stop the endless discussion—that will live on forever.


Schedule surgeries, earnings calls, and therapy appointments before noon. Score the biggest bucks by switching jobs every three to five years. The ideal age to get hitched (and avoid divorce): 32. In his new book, When: The Scientific Secrets of Perfect Timing, Daniel Pink scours psychological, biological, and economic studies to explore what he calls the overlooked dimension. “Timing exerts an incredible effect on what we do and how we do it,” he says. Now that the science of “when” is finally getting its due, Pink shares some temporal hacks to optimize your life.

Snag the first shift. Mood and energy levels follow predictable circadian rhythms based on our genetically predisposed chronotype. The average person’s mood bottoms out approximately seven hours after waking, between 2 and 4 pm. That’s when the incidence of on-the-job errors spikes—most notably at hospitals. “My daughter had her wisdom teeth taken out a few months ago,” Pink says. “I said, ‘You are getting the first appointment of the day.’ ”

Brew before you snooze. The benefits of naps have been well-documented—10 to 20 minutes of shut-eye sharpens cognitive ability without triggering a daze—but a prenap coffee can enhance those benefits. The caffeine kicks in after about 25 minutes for a post-snooze brain boost. “Breaks need to be thought of as integral to the architecture of a day’s work,” Pink says. Columbia University researchers found that judges doled out more lenient sentences after breaks, and a CDC study showed that kids with longer recesses earned better grades.

Bring up the rear. If you’re competing in a large group, wait until the end to showcase your skills. In an eight-country study of American Idol–like contests, later singers advanced more often, and those who went last had a 10 to 15 percent greater chance of moving on. Research suggests that judges start out idealistic—evaluating contestants against an imaginary goal—but then settle into a less lofty baseline. One exception: election ballots. Voters tend to pick the first name on the list, whether they’re choosing city councillors or prom kings.

Resist the “oh no” effect. Midpoints—of work projects, training regimens, and yeah, life—can either discourage (the “oh no” effect) or motivate (“uh oh, time’s running out”). UCLA researchers studying teamwork found that the majority of groups did almost no work until halfway to the deadline, then suddenly buckled down. Set interim goals and adopt the “chain” technique: Pick a task and mark a calendar with an X every day you do it—the string of X’s serves as an incentive.

Get it together. Whether it’s rowing, running, or flash mobbing, synchronized activities lower stress and provide mind-body benefits. Singing in groups has been found to improve self-esteem and mitigate depression; in particular, choral singing can increase pain thresholds and improve cancer patients’ immune responses. “It operates on a physiological level,” Pink says. “Their hearts even beat in sync.” Next time you hit the karaoke bar, relinquish that glory-hogging solo.

Today, a teaspoon of spit and a hundred bucks is all you need to get a snapshot of your DNA. But getting the full picture—all 3 billion base pairs of your genome—requires a much more laborious process. One that, even with the aid of sophisticated statistics, scientists still struggle over. It’s exactly the kind of problem that makes sense to outsource to artificial intelligence.

On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to identify all the mutations that an individual inherits from their parents.1 Modeled loosely on the networks of neurons in the human brain, these massive mathematical models have learned how to do things like identify faces posted to your Facebook news feed, transcribe your inane requests to Siri, and even fight internet trolls. And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

And oh yeah, it’s more accurate than all the existing methods out there. Last year, DeepVariant took first prize in an FDA contest promoting improvements in genetic sequencing. The open source version the Google Brain/Verily team introduced to the world Monday reduced the error rates even further—by more than 50 percent. Looks like grandmaster Ke Jie isn’t the only one getting bested by Google’s AI neural networks this year.

DeepVariant arrives at a time when healthcare providers, pharma firms, and medical diagnostic manufacturers are all racing to capture as much genomic information as they can. To meet the need, Google rivals like IBM and Microsoft are all moving into the healthcare AI space, with speculation about whether Apple and Amazon will follow suit. While DeepVariant’s code comes at no cost, that isn’t true of the computing power required to run it. Scientists say that expense is going to prevent it from becoming the standard anytime soon, especially for large-scale projects.

But DeepVariant is just the front end of a much wider deployment; genomics is about to go deep learning. And once you go deep learning, you don’t go back.

It’s been nearly two decades since high-throughput sequencing escaped the labs and went commercial. Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

But today’s machines still produce only incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently-tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome. From there they can go on to variant calling—identifying where an individual's genes differ from the reference.1 A number of software programs exist to help do that. FreeBayes, VarDict, Samtools, and the most well-used, GATK, depend on sophisticated statistical approaches to spot mutations and filter out errors. Each tool has strengths and weaknesses, and scientists often wind up having to use them in conjunction.
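For flavor, here is a deliberately naive sketch of that statistical idea: pile the reads up on the reference and flag a position where the non-reference base dominates. The reference string, the reads, and the 50 percent threshold are invented for illustration; real callers like GATK model base quality, mapping errors, diploid genotypes, and indels, none of which appears here.

```python
# Naive pileup-based variant caller (illustration only).
from collections import Counter

reference = "ACGTACGTAC"
# Aligned reads: (start position on the reference, read sequence). Invented data.
reads = [
    (0, "ACGTACGTA"),    # matches the reference
    (1, "CGTTCGTAC"),    # T instead of A at reference position 4
    (2, "GTTCGTA"),      # another read carrying the T
    (3, "TTCGTAC"),      # and another
    (5, "CGTAC"),        # matches the reference
]

# Build a pileup: every base each read places on each reference position.
pileup = [Counter() for _ in reference]
for start, seq in reads:
    for offset, base in enumerate(seq):
        pileup[start + offset][base] += 1

# Call a variant where the most common base disagrees with the reference
# in at least half of the overlapping reads.
for pos, (ref_base, column) in enumerate(zip(reference, pileup)):
    depth = sum(column.values())
    if depth == 0:
        continue
    top_base, top_count = column.most_common(1)[0]
    if top_base != ref_base and top_count / depth >= 0.5:
        print(f"possible variant at position {pos}: {ref_base} -> {top_base}")
```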

No one knows the limitations of the existing technology better than Mark DePristo and Ryan Poplin. They spent five years creating GATK from whole cloth. This was 2008: no tools, no bioinformatics formats, no standards. “We didn’t even know what we were trying to compute!” says DePristo. But they had a north star: an exciting paper that had just come out, written by a Silicon Valley celebrity named Jeff Dean. As one of Google’s earliest engineers, Dean had helped design and build the fundamental computing systems that underpin the tech titan’s vast online empire. DePristo and Poplin used some of those ideas to build GATK, which became the field’s gold standard.

But by 2013, the work had plateaued. “We tried almost every standard statistical approach under the sun, but we never found an effective way to move the needle,” says DePristo. “It was unclear after five years whether it was even possible to do better.” DePristo left to pursue a Google Ventures-backed start-up called SynapDx that was developing a blood test for autism. When that folded two years later, one of its board members, Andrew Conrad (of Google X, then Google Life Sciences, then Verily) convinced DePristo to join the Google/Alphabet fold. He was reunited with Poplin, who had joined up the month before.

And this time, Dean wasn’t just a citation; he was their boss.

As the head of Google Brain, Dean is the man behind the explosion of neural nets that now prop up all the ways you search and tweet and snap and shop. With his help, DePristo and Poplin wanted to see if they could teach one of these neural nets to piece together a genome more accurately than their baby, GATK.

The network wasted no time in making them feel obsolete. After training it on benchmark datasets of just seven human genomes, DeepVariant was able to accurately identify those single nucleotide swaps 99.9587 percent of the time. “It was shocking to see how fast the deep learning models outperformed our old tools,” says DePristo. Their team submitted the results to the PrecisionFDA Truth Challenge last summer, where it won a top performance award. In December, they shared them in a paper published on bioRxiv.

DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set. In the first working model they used three channels: The first was the actual bases, the second was a quality score defined by the sequencer the reads came off of, the third contained other metadata. By compressing all that data into an image file of sorts, and training the model on tens of millions of these multi-channel “images,” DeepVariant began to be able to figure out the likelihood that any given A or T or C or G either matched the reference genome completely, varied by one copy, or varied by both.
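Here is a toy sketch of that image-like encoding: stack per-read, per-position numbers into aligned channels, the way a photo stacks red, green, and blue. The specific channels, the base encoding, and the reads below are invented for illustration; DeepVariant's real featurization is richer than this.

```python
# Toy "pileup image": one row per read, one column per reference position,
# with separate channels for base identity, base quality, and reference match.
import numpy as np

reference = "ACGTA"
reads = [("ACGTA", [30, 32, 31, 29, 33]),     # (bases, per-base quality scores)
         ("ACTTA", [28, 30, 12, 31, 30]),     # mismatch at position 2
         ("ACGTA", [33, 31, 30, 32, 29])]

base_code = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

n_reads, n_pos, n_channels = len(reads), len(reference), 3
tensor = np.zeros((n_reads, n_pos, n_channels))

for i, (bases, quals) in enumerate(reads):
    for j, (base, qual) in enumerate(zip(bases, quals)):
        tensor[i, j, 0] = base_code[base]                 # channel 0: which base
        tensor[i, j, 1] = qual / 60.0                     # channel 1: quality, rescaled
        tensor[i, j, 2] = float(base == reference[j])     # channel 2: matches reference?

print(tensor.shape)   # (3, 5, 3): reads x positions x channels
# A convolutional network trained on millions of such labeled tensors learns
# to score candidate variants like the mismatch in the middle column.
```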

But they didn’t stop there. After the FDA contest they transitioned the model to TensorFlow, Google's artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by the genomics computing platform DNAnexus, DeepVariant vastly outperformed GATK, FreeBayes, and Samtools, sometimes reducing errors by as much as 10-fold.

“That shows that this technology really has an important future in the processing of bioinformatic data,” says DNAnexus CEO Richard Daly. “But it’s only the opening chapter in a book that has 100 chapters.” Daly says he expects this kind of AI to one day actually find the mutations that cause disease. His company received a beta version of DeepVariant, and is now testing the current model with a limited number of its clients—including pharma firms, big health care providers, and medical diagnostic companies.


To run DeepVariant effectively for these customers, DNAnexus has had to invest in newer-generation GPUs to support its platform. The same is true for the Canadian competitor DNAStack, which plans to offer two different versions of DeepVariant—one tuned for low cost and one tuned for speed. Google’s Cloud Platform already supports the tool, and the company is exploring using the TPUs (tensor processing units) that power things like Google Search, Street View, and Translate to accelerate the genomics calculations as well.

DeepVariant’s code is open source, so anyone can run it, but to do so at scale will likely require paying for a cloud computing platform. And it’s this cost—computationally and in terms of actual dollars—that has researchers hedging on DeepVariant’s utility.

“It’s a promising first step, but it isn’t currently scalable to a very large number of samples because it’s just too computationally expensive,” says Daniel MacArthur, a Broad/Harvard human geneticist who has built one of the largest libraries of human DNA to date. For projects like his, which deal in tens of thousands of genomes, DeepVariant is just too costly. And, just like current statistical models, it can only work with the limited reads produced by today’s sequencers.

Still, he thinks deep learning is here to stay. “It’s just a matter of figuring out how to combine better quality data with better algorithms and eventually we’ll converge on something pretty close to perfect,” says MacArthur. But even then, it’ll still just be a list of letters. At least for the foreseeable future, we’ll still need talented humans to tell us what it all means.

1 Correction 12/12/17 4:28pm EST An earlier version of this article incorrectly referred to what DeepVariant does as "assembling genomes." The tool calls variants, which is an important part of the genotyping process, but one not involved in genome assembly. WIRED regrets the error.


It seems like I have been slightly obsessed with flashlights for quite some time. Perhaps it started when the Maglite lights became popular in the '80s. It was that mini Maglite that ran on 2 AA batteries that I really liked. It was small enough that you could carry it around and bright enough that it could actually be useful. When I was a bit older, I would even build and modify my own flashlights. One of my favorites was an underwater light I used for cave diving. It ran on a large lead-acid battery and powered a 25-watt projector bulb. That thing was great (but not super portable).

These days, you can get some of these super-bright LED lights. They're cheap and they last a long time, so I guess there's no point in trying to find the best flashlight anymore. But there's always a point to doing some physics! So my next step is to analyze the brightness—that's what I do.

The flashlight I have is listed at 900 lumens. But what the heck is a lumen?

The study of light is really old. Back in the day, they didn't use fancy LED lights or measure stuff with computers. No, they just used candles. In fact, the unit of the lumen comes from a candle. Yes, an actual candle. Technically, it's from a standard candle—which is a candle that produces a flame of a particular, reproducible brightness. As a candle makes light, that light spreads out and decreases in intensity, and if you were to integrate the intensity over a whole sphere surrounding the candle, you would get the total output (technically called the luminous flux—which sounds pretty cool). One candle would have a brightness of 4π lumens (the 4π comes from the area of a sphere).
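In symbols, with the luminous intensity I integrated over the full sphere of solid angle (a standard candle is roughly one candela):

$$\Phi = \int I \, d\Omega = (1~\text{candela}) \times (4\pi~\text{steradians}) \approx 12.6~\text{lumens}.$$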

But wait! How is this different from total power output? Yes, a light does emit energy per unit time, which can be measured as power (in watts). For a typical light, much of this power is in forms that are not detectable by the human eye—like stuff in the infrared spectrum. So the brightness in lumens counts only the human-detectable light. Really, this is for the best, since in the earlier days of science the only light detector available was the human eye.

If I want to measure the brightness of a flashlight, I don't really want to use my eyes. Eyes don't give a numerical value as an output. This means I would have to do something weird in order to convince you that a light has a particular brightness. Instead I am going to use a light sensor (I'm using this one)—but it doesn't measure the brightness. It measures the luminous intensity (at least that's what I call it but others call it the illuminance). This is the visible brightness per unit of area and it is measured in lux.

Since this might be getting confusing, let me use an analogy. Suppose you have a sheet of paper that is in the rain and getting wet. There are two things to consider for your wet paper. First, there is the rate of rain. It can rain hard or soft. This is like the luminous intensity. Second, there is the rate that water hits the paper. This depends on both the rain rate and the size of the paper. The total rain hitting the paper would be like the luminous flux (in lumens).

In order to calculate the luminous flux, I can measure the luminous intensity and assume it's constant over some area. The product of intensity and area would give me the luminous flux in lumens. So here's what I'm going to do. I will take my flashlight and shine it at the light sensor. I will also measure the size of the light spot that it makes (in square meters). If the intensity is constant over the whole area then I just need to multiply the area and the value of lux.

However, since I like to make graphs, I will do a slight variation. My flashlight can create a variable spot size. This means I can plot the intensity vs. one divided by the spot area. The slope of this line should be the luminous flux. Let's do it. (Here is the online plot)
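Here is a minimal sketch of that slope calculation. The lux readings and spot areas below are invented stand-ins for the actual measurements, chosen only to show the arithmetic.

```python
# Estimate luminous flux (lumens) as the slope of illuminance (lux)
# versus 1/area (1/m^2), since lux = lumens / area for a uniform spot.
import numpy as np

spot_area = np.array([0.10, 0.050, 0.025, 0.0125])    # spot sizes in m^2 (made up)
illuminance = np.array([1300, 2600, 5200, 10400])      # sensor readings in lux (made up)

slope, intercept = np.polyfit(1 / spot_area, illuminance, 1)
print(f"estimated luminous flux: {slope:.0f} lumens")   # ~130 lumens for this data
```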

This looks pretty linear—so that's good. However, I get a slope of 130 lumens—that's bad, because the flashlight is listed at 900 lumens. OK, here are some possible problems with this value.

  • The flashlight is wrong. It's not 900 lumens, it's only 130 lumens.
  • I made an assumption that was incorrect. I assumed the luminous intensity was fairly constant over the whole spot. If it's not constant, I would have to measure intensity at different locations and integrate over the area. I don't think my assumption was super bonkers wrong—if it was, then the plot of intensity vs. one over the area wouldn't be linear.
  • It's possible that the listed flashlight brightness was based on a different wavelength than the one measured by my light sensor. This shouldn't happen, though.
  • Finally, it's always possible that I messed up somewhere.

Whatever the brightness, it's still a pretty nice flashlight. But maybe I should build one of these water-cooled LED lights. That would be fun.


Your brain is one enigmatic hunk of meat—a wildly complex web of neurons numbering in the tens of billions. But years ago, when you were in the womb, it began as little more than a scattering of undifferentiated stem cells. A series of genetic signals transformed those blank slates into the wrinkly, three-pound mass between your ears. Scientists think the way your brain looks and functions can be traced back to those first molecular marching orders—but precisely when and where these genetic signals occur has been difficult to pin down.

Today, things are looking a little less mysterious. A team of researchers led by neuroscientists at UC San Francisco has spent the last five years compiling the first entries in what they hope will become an extensive atlas of gene expression in the developing human brain. The researchers describe the project in the latest issue of Science, and, with the help of researchers at UC Santa Cruz, they've made an interactive version of the atlas freely available online.

"The point of creating an atlas like this is to understand how we make a human brain," says study coauthor Aparna Bhaduri. To do that, she and her colleagues analyzed not only how gene expression varies from cell to cell, but where and at what stages of brain development those genes come into play.

Crucially, the researchers performed that analysis at the level of individual brain cells—a degree of specificity neuroscientists have struggled to achieve in the past. That's huge, in part because it gives researchers their clearest picture yet of where and in which cells certain genes are expressed in the fetal brain. But it also means scientists can begin to characterize early brain cells not according to things like their shape and location (two variables that neuroscientists have long used to classify cellular types and subtypes), but by the bits of DNA they turn on and off. As developmental neurobiologist Ed Lein, who was unaffiliated with the study, says: "This is not the first study in this area by any means, but the single cell technique is a game changer."

Lein would know. An investigator at the Allen Institute for Brain Science (a key institutional player in the mission to map the human brain, and the home of several ambitious brain atlas projects from the past decade), he and his colleagues performed a similar survey of gene expression in developing human brains in 2014. To build it, they sliced fetal brain tissue into tiny pieces and scanned them for gene expression. But even after dissecting them as finely as possible, Lein says the cell populations of the resulting brain bits were still extremely diverse. Even a microscopic speck of gray matter contains a menagerie of functionally distinct cells, from astrocytes to neurons to microglia (though, to be perfectly frank, neuroscientists aren't even sure how many cell types exist).

"When we measured the genes in our samples," says Lein, "what we actually saw was the average output of all the cells in that sample." When they were through, Lein and his colleagues had mapped the location and activity of some 20,000 genes in anatomical regions throughout the brain. But they still didn't know which individual cells those genes came from.

UCSF's new brain atlas doesn't span as many regions as the Allen Institute's (not yet, at least), but the anatomical areas it does cover, it covers with much greater specificity. "The difference between previous studies and ours is the difference between a smoothie and a fruit salad," says study coauthor Alex Pollen. "They have the same ingredients, but one mixes them together and the other looks at them individually."

The UCSF researchers focused on regions of the developing brain that eventually become the basal ganglia, which helps orchestrate things like voluntary motor control, and the cerebral cortex, the largest region of the mammalian brain and the seat of many human cognitive abilities. By examining the expression of individual cells from 48 brains at various stages of development, the researchers were able to trace a handful of genetic and developmental patterns to 11 broad categories of cell—and make a number of unexpected observations.


"One big surprise is that region-specific neurons seem to form very early in the developmental process," says neurobiologist Tomasz Nowakowski, who led the study. That includes neurons in the prefrontal cortex, whose formation neuroscientists have long theorized to be influenced by sensory experience. But the new atlas suggests those areas begin to take shape before sensory experiences even have a chance to take place. That's the kind of finding that could fundamentally alter neuroscientists' understanding of the structure and function of adult brains.

The project's other takeaways are too numerous to list here. But that's the thing about brain atlases: They're great at generating questions. "These things are foundational," Lein says. "The reason these atlases are valuable is you can do a systematic analysis in one fell swoop and generate 10,000 hypotheses." Testing the hypotheses generated from this latest atlas will hinge on researchers' ability to access and add to it, which is why Nowakowski and his colleagues collaborated with UC Santa Cruz computer programmer Jim Kent to turn their database into an interactive online visualization.

Researchers will also want to cross-reference this atlas with similar projects. After all, there's more than one way to map a brain. You can classify its neurons by shape, location, or the genes they express. You can map the gene expression of old brains, young brains, and brains of different species. A recent project from the Allen Institute even classifies neurons according to their electrical activity. Brain atlases are like puzzle pieces that way: The more you have to work with, the easier it is to see the big picture—and how all the pieces fit.


How to Run Up a Wall—With Physics!

March 20, 2019 | Story | No Comments

I can't decide if this looks like something from a super hero movie or from a video game. In this compilation video of crazy stunts, a guy somehow finds a way to bound up between two walls by jumping from one to the other. "Somehow," of course, means with physics: This move is based on the momentum principle and friction. Could you pull it off? Probably not. But you can at least do the math.

The first key to this move is the momentum principle, where momentum is the product of an object's mass and velocity. The momentum principle says that in order to change an object's momentum, you need a net force acting on the object. This can be described with the following vector equation (note that everyone uses the symbol "p" to represent momentum):

\[
\vec{F}_{\text{net}} = \frac{\Delta \vec{p}}{\Delta t}
\]

How about a quick example to show how this works? Take an object like a pencil, a ball, or a sandwich and hold it out at arm's length. Now let go of the object. After your hand releases contact with the object (in my mind, it's a sandwich), there is only one force acting—the gravitational force pulling down. What does this force do to the object? It changes the object's momentum in the direction of the force. So after 0.1 seconds, the object's momentum will be in the downward direction which means it speeds up (since the mass is constant). After the next 0.1 second, the object gets even faster. In fact, the sandwich will continue to speed up as it falls until there is another force acting on it (from the floor) to slow the sandwich down. Don't worry, you can still eat it if you get it before the five second rule is over.
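Here's a rough numerical sketch of that idea (my own made-up sandwich mass, nothing official): apply the momentum principle in 0.1-second steps, with gravity as the only force, and watch the speed grow each step.

```python
# Numerically apply the momentum principle to a dropped object:
# each time step, the net force (gravity) changes the momentum.
m = 0.3    # kg, a guess for a sandwich
g = 9.8    # N/kg, gravitational field strength
dt = 0.1   # s, time step
p = 0.0    # kg*m/s, starts at rest

for step in range(1, 6):
    F_net = -m * g        # gravitational force, downward (negative)
    p = p + F_net * dt    # momentum principle: change in p = F_net * dt
    v = p / m             # velocity from momentum (mass is constant)
    print(f"t = {step * dt:.1f} s, v = {v:.2f} m/s")
```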

The second key idea needed to run up a wall is friction. Friction is a force that acts on an object when two surfaces are pushed together. For a fairly reliable model of this force, we can say that the friction force is parallel to the interacting surfaces, with a magnitude that is at most proportional to the force pushing the two surfaces together. This would be modeled as the following equation:

\[
F_{\text{friction}} \leq \mu_s N
\]

This expression is for the maximum friction force between two surfaces. In the equation, μs is the coefficient of static friction that depends on the two materials interacting and N is the force pushing the two surfaces together (we call this the normal force).

So you can see that this friction will be necessary to run up the wall. However, you can't just run up a vertical wall because there would be no (or very little) normal force between your foot and the wall. With no normal force, there is no friction. That's bad. You fall.

Now for the video. The guy is able to run up the wall by first moving toward, then moving away from, the wall. That means his momentum changes (it changes direction), and a change in momentum requires a force—in this case, the force from the wall. The problem is that you can only push on the wall for a short amount of time before you move away from it and lose contact. What makes it work here is that there is another wall on the other side, so the runner can switch walls and repeat the move.

Here is a diagram that should help.

If I can estimate the change in momentum and the time interval for one of these wall jumps, I can calculate the force from the wall and then the required frictional force. Let's do it.

For this analysis, I will need to get an approximate position of the human in each frame and I can do this with video analysis. I'm making some guesses on the size of things, but I suspect it's close enough for a rough value of friction. Although the camera zooms and pans during the motion, I can correct for that with some software. Here's what it looks like.

But that's not what I want. I want to look at the change in momentum. Here is a look at just the x-motion during this wall climb.

From the slope of this position-time graph, I can get the x-velocity before the collision with the wall: 1.39 m/s. After the collision, the dude is moving with an x-velocity of -2.23 m/s. If I assume a human mass of 75 kg, this is a change in x-momentum of -271.5 kg*m/s. Also from the graph, I get an interaction time of about 0.2 seconds. The change in x-momentum divided by the time interval gives an average x-force of 1357.5 Newtons (in the negative x-direction). For the imperial people, that is a force of about 305 pounds. Yes, that's a lot—but it only lasts for a short time.

Since this force is in the x-direction, it is the same as the normal force pushing between the foot and the wall. Using the friction model above, I can solve for the coefficient of friction, since the friction force has to support the runner's weight (75 kg × 9.8 N/kg = 735 N). This means the minimum coefficient of static friction must be 0.54 (the coefficient has no units). And that's just fine. This table of coefficients of friction lists rubber on concrete with a range of 0.6 to 0.85—so this is entirely plausible.
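If you want to check the arithmetic, here's a short sketch that reproduces those numbers from the values above:

```python
# Reproduce the wall-jump estimate from the measured velocities.
m = 75.0       # kg, assumed mass of the runner
v1_x = 1.39    # m/s, x-velocity moving toward the wall
v2_x = -2.23   # m/s, x-velocity after pushing off
dt = 0.2       # s, interaction time read from the graph

dp_x = m * (v2_x - v1_x)          # change in x-momentum: -271.5 kg*m/s
F_wall = dp_x / dt                # average force from the wall: -1357.5 N
F_pounds = abs(F_wall) / 4.448    # about 305 pounds

weight = m * 9.8                  # 735 N, the friction force needed
mu_min = weight / abs(F_wall)     # minimum coefficient of static friction, ~0.54

print(F_wall, F_pounds, mu_min)
```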

Still, I wouldn't recommend you try any old "plausible" move. It takes a long while to train from plausible to possible.


3 Smart Things: The Hidden Lives of Liquids

March 20, 2019 | Story | No Comments

1. Most substances make a clean break between their liquid and solid states. But liquid crystals straddle the boundary: They flow smoothly, like water, while maintaining a crystalline structure. A tiny jolt of electricity aligns the molecules in the same direction and allows them to rotate light—an effect you see when the pixels in your LCD television or smartphone flip on and off to form pretty images.

2. Our body’s natural lubricant, saliva, does double duty by sloshing away bacteria and neutralizing your mouth’s acidity. There’s high demand for the liquid. Human salivary glands pump out about a quart a day—a Big Gulp! The dry-of-mouth can always purchase backup sprays, gels, and swabs on the $1 billion artificial saliva market.

3. Packed with enough dissolved oxygen, liquid becomes breathable. While water can’t hold the requisite O2, the synthetic oil perfluorocarbon can absorb three times more oxygen than blood, meaning you could survive prolonged submersion (and even take a selfie—perfluorocarbon doesn’t harm electronics). This property also makes doctors hopeful that the oil could one day help soothe the frail lungs of premature newborns.

Adapted from Liquid Rules: The Delightful and Dangerous Substances That Flow Through Our Lives, by Mark Miodownik, out February 19.