Can an Airplane Take Off on a Moving Runway?

March 20, 2019

This question is probably as old as the airplane itself. It goes something like this: a plane sits on a giant conveyer belt (or treadmill) that moves backward at exactly the same speed the plane moves forward. Can the plane take off?

The first question a reasonable person would ask is "Where do you get a giant plane-sized treadmill that goes 100 mph?" Yes, that is indeed a good question—but I won't answer it. Instead, I'm going to give this question the best physics answer I can.

Before I do that, I should point out that others have also answered this question (not surprising since it's super old anyway). First, there is the MythBusters episode from 2008. Actually, they didn't answer the question—they did the question. The MythBusters made a giant conveyer belt with a plane on it. It was awesome. Second, there is the xkcd answer to this question (also from 2008).

Now you get my answer. I'll build up to it with a few different examples.

A Car on a Conveyer Belt

This isn't so difficult. What if I put a car going 100 mph on a conveyer belt that is also going 100 mph? It would look something like this:

Really, there is probably no surprise here. The car's wheels would roll at 100 mph as the treadmill (or conveyer belt) moves back at 100 mph so that the car remains stationary. Actually, here is a slightly cooler example (with the same physics).

Here is an experiment (also from the MythBusters) in which they shot a ball at 60 mph out the back of a truck also going 60 mph. You can see that the ball remains stationary (with respect to the ground).

Super Short Takeoff

Here is a plane from Alaska that takes off in a very short distance.

How does this work? I'll give you a hint—there is a very strong wind blowing into the front of the plane. Without a headwind, this wouldn't happen. But if you think about it, this short takeoff is very much like the car on the treadmill. A plane doesn't drive on the ground—it "drives" in the air. If the plane has a takeoff speed of 40 mph and is in a 40 mph headwind, it doesn't even need to move at all with respect to the ground.
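
To see that arithmetic spelled out, here's a tiny sketch in Python (the numbers are just the ones from the example above):

```python
# A tiny sketch: the wings only care about airspeed, which is the plane's
# speed over the ground plus the headwind.
takeoff_airspeed = 40.0  # mph of air over the wings needed to lift off
headwind = 40.0          # mph of wind blowing into the front of the plane

ground_speed_needed = takeoff_airspeed - headwind
print(ground_speed_needed)  # 0.0 mph -- the plane lifts off without moving over the ground
```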

Plane on a Conveyer Belt

Now let's do it. Here is a short clip from the MythBusters launching a plane on a moving treadmill.

Yes, it takes off. A plane really can take off from a runway moving in the opposite direction. But why? It's because the wheels on a plane don't really do anything. Their only function is to provide low friction between the aircraft and the ground. They don't even push the plane forward—that is done by the propeller. The only difference when launching a plane on a moving runway is that the wheels will spin at twice the normal speed—but that shouldn't matter.

So the plane on a treadmill works, but how about a case where the plane wouldn't take off? What if the plane was more like a glider with motorized wheels? On a normal runway, these motorized wheels would increase the speed of the glider until it reached takeoff speed. But if you put this on a moving runway, the wheels would spin at the right speed and cancel the motion of the treadmill so that the plane would remain motionless and never reach the proper speed for a launch.

OK, so that is the answer to everyone's favorite question. But don't worry, this answer won't stop the endless discussion—that will live on forever.

Schedule surgeries, earnings calls, and therapy appointments before noon. Score the biggest bucks by switching jobs every three to five years. The ideal age to get hitched (and avoid divorce): 32. In his new book, When: The Scientific Secrets of Perfect Timing, Daniel Pink scours psychological, biological, and economic studies to explore what he calls the overlooked dimension. “Timing exerts an incredible effect on what we do and how we do it,” he says. Now that the science of “when” is finally getting its due, Pink shares some temporal hacks to optimize your life.

Snag the first shift. Mood and energy levels follow predictable circadian rhythms based on our genetically predisposed chronotype. The average person’s mood bottoms out approximately seven hours after waking, between 2 and 4 pm. That’s when the incidence of on-the-job errors spikes—most notably at hospitals. “My daughter had her wisdom teeth taken out a few months ago,” Pink says. “I said, ‘You are getting the first appointment of the day.’ ”

Brew before you snooze. The benefits of naps have been well-documented—10 to 20 minutes of shut-eye sharpens cognitive ability without triggering a daze—but a prenap coffee can enhance those benefits. The caffeine kicks in after about 25 minutes for a post-snooze brain boost. “Breaks need to be thought of as integral to the architecture of a day’s work,” Pink says. Columbia University researchers found that judges doled out more lenient sentences after breaks, and a CDC study showed that kids with longer recesses earned better grades.

Bring up the rear. If you’re competing in a large group, wait until the end to showcase your skills. In an eight-country study of American Idol–like contests, later singers advanced more often, and those who went last had a 10 to 15 percent greater chance of moving on. Research suggests that judges start out idealistic—evaluating contestants against an imaginary goal—but then settle into a less lofty baseline. One exception: election ballots. Voters tend to pick the first name on the list, whether they’re choosing city councillors or prom kings.

Resist the “uh oh” effect. Midpoints—of work projects, training regimens, and yeah, life—can either discourage (the “oh no” effect) or motivate (“uh oh, time’s running out”). UCLA researchers studying teamwork found that the majority of groups did almost no work until halfway to the deadline, then suddenly buckled down. Set interim goals and adopt the “chain” technique: Pick a task and mark a calendar with an X every day you do it—the string of X’s serves as an incentive.

Get it together. Whether it’s rowing, running, or flash mobbing, synchronized activities lower stress and provide mind-body benefits. Singing in groups has been found to improve self-esteem and mitigate depression; in particular, choral singing can increase pain thresholds and improve cancer patients’ immune responses. “It operates on a physiological level,” Pink says. “Their hearts even beat in sync.” Next time you hit the karaoke bar, relinquish that glory-hogging solo.

Today, a teaspoon of spit and a hundred bucks is all you need to get a snapshot of your DNA. But getting the full picture—all 3 billion base pairs of your genome—requires a much more laborious process. One that, even with the aid of sophisticated statistics, scientists still struggle over. It’s exactly the kind of problem that makes sense to outsource to artificial intelligence.

On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to identify all the mutations that an individual inherits from their parents.1 Modeled loosely on the networks of neurons in the human brain, these massive mathematical models have learned how to do things like identify faces posted to your Facebook news feed, transcribe your inane requests to Siri, and even fight internet trolls. And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

And oh yeah, it’s more accurate than all the existing methods out there. Last year, DeepVariant took first prize in an FDA contest promoting improvements in genetic sequencing. The open source version the Google Brain/Verily team introduced to the world Monday reduced the error rates even further—by more than 50 percent. Looks like grandmaster Ke Jie isn’t the only one getting bested by Google’s AI neural networks this year.

DeepVariant arrives at a time when healthcare providers, pharma firms, and medical diagnostic manufacturers are all racing to capture as much genomic information as they can. To meet the need, Google rivals like IBM and Microsoft are all moving into the healthcare AI space, with speculation about whether Apple and Amazon will follow suit. While DeepVariant’s code comes at no cost, that isn’t true of the computing power required to run it. Scientists say that expense is going to prevent it from becoming the standard anytime soon, especially for large-scale projects.

But DeepVariant is just the front end of a much wider deployment; genomics is about to go deep learning. And once you go deep learning, you don’t go back.

It’s been nearly two decades since high-throughput sequencing escaped the labs and went commercial. Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

But the data produced by today’s machines still adds up to incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome. From there they can go on to variant calling—identifying where an individual's genes differ from the reference.1 A number of software programs exist to help do that. FreeBayes, VarDict, Samtools, and the most widely used, GATK, depend on sophisticated statistical approaches to spot mutations and filter out errors. Each tool has strengths and weaknesses, and scientists often wind up having to use them in conjunction.
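
To make that pipeline concrete, here is a toy, runnable sketch in Python—my own illustration, not the workflow of any real tool like GATK—where every function (chop_into_reads, align_read, call_variants) is a deliberately naive stand-in for the stage described above:

```python
import random
from collections import Counter, defaultdict

def chop_into_reads(genome, read_len=8, n_reads=200):
    """Cut the sample genome into short 'reads' starting at random positions."""
    starts = (random.randrange(len(genome) - read_len + 1) for _ in range(n_reads))
    return [genome[i:i + read_len] for i in starts]

def align_read(read, reference):
    """Naive alignment: slide the read along the reference and keep the best match."""
    best_pos, best_score = 0, -1
    for pos in range(len(reference) - len(read) + 1):
        score = sum(a == b for a, b in zip(read, reference[pos:pos + len(read)]))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

def call_variants(reads, reference):
    """Pile up aligned bases and report positions where the consensus differs."""
    piles = defaultdict(Counter)
    for read in reads:
        pos = align_read(read, reference)
        for offset, base in enumerate(read):
            piles[pos + offset][base] += 1
    variants = {}
    for pos, counts in sorted(piles.items()):
        consensus = counts.most_common(1)[0][0]
        if consensus != reference[pos]:
            variants[pos] = (reference[pos], consensus)
    return variants

reference = "ACGTACGTTAGCCGATACGGATCCTAGG"
sample    = "ACGTACGTTAGCCGATACGGATGCTAGG"  # the same genome with one base changed
print(call_variants(chop_into_reads(sample), reference))  # expect roughly {22: ('C', 'G')}
```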

No one knows the limitations of the existing technology better than Mark DePristo and Ryan Poplin. They spent five years creating GATK from whole cloth. This was 2008: no tools, no bioinformatics formats, no standards. “We didn’t even know what we were trying to compute!” says DePristo. But they had a north star: an exciting paper that had just come out, written by a Silicon Valley celebrity named Jeff Dean. As one of Google’s earliest engineers, Dean had helped design and build the fundamental computing systems that underpin the tech titan’s vast online empire. DePristo and Poplin used some of those ideas to build GATK, which became the field’s gold standard.

But by 2013, the work had plateaued. “We tried almost every standard statistical approach under the sun, but we never found an effective way to move the needle,” says DePristo. “It was unclear after five years whether it was even possible to do better.” DePristo left to pursue a Google Ventures-backed start-up called SynapDx that was developing a blood test for autism. When that folded two years later, one of its board members, Andrew Conrad (of Google X, then Google Life Sciences, then Verily) convinced DePristo to join the Google/Alphabet fold. He was reunited with Poplin, who had joined up the month before.

And this time, Dean wasn’t just a citation; he was their boss.

As the head of Google Brain, Dean is the man behind the explosion of neural nets that now prop up all the ways you search and tweet and snap and shop. With his help, DePristo and Poplin wanted to see if they could teach one of these neural nets to piece together a genome more accurately than their baby, GATK.

The network wasted no time in making them feel obsolete. After training it on benchmark datasets of just seven human genomes, DeepVariant was able to accurately identify single-nucleotide swaps 99.9587 percent of the time. “It was shocking to see how fast the deep learning models outperformed our old tools,” says DePristo. Their team submitted the results to the PrecisionFDA Truth Challenge last summer, where it won a top performance award. In December, they shared them in a paper published on bioRxiv.

DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set. In the first working model they used three channels: The first was the actual bases, the second was a quality score defined by the sequencer the reads came off of, the third contained other metadata. By compressing all that data into an image file of sorts, and training the model on tens of millions of these multi-channel “images,” DeepVariant began to be able to figure out the likelihood that any given A or T or C or G either matched the reference genome completely, varied by one copy, or varied by both.
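
To make the "channels" idea concrete, here is a minimal sketch in Python—my own illustration, not Google's actual code—of how a pileup of reads at one candidate site could be packed into a three-channel array (bases, quality scores, and one stand-in metadata channel) for an image classifier:

```python
import numpy as np

# Toy pileup at one candidate variant site: a few reads (rows) covering a
# short window of positions (columns), with per-base quality scores and a
# strand flag standing in for "other metadata."
reads      = ["ACGTA", "ACGTA", "ACCTA", "ACGTA"]
qualities  = [[30, 32, 35, 31, 29]] * 4   # Phred-like base quality scores
on_forward = [1, 0, 1, 1]                 # strand flag for each read

BASE_CODE = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}

height, width = len(reads), len(reads[0])
tensor = np.zeros((height, width, 3), dtype=np.float32)

for row, (read, quals, strand) in enumerate(zip(reads, qualities, on_forward)):
    for col, (base, q) in enumerate(zip(read, quals)):
        tensor[row, col, 0] = BASE_CODE[base]  # channel 0: which base was read
        tensor[row, col, 1] = q / 60.0         # channel 1: base quality, scaled
        tensor[row, col, 2] = strand           # channel 2: a metadata flag

print(tensor.shape)  # (4, 5, 3): an "image" a classifier can label as matching
                     # the reference, varying in one copy, or varying in both
```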

But they didn’t stop there. After the FDA contest they transitioned the model to TensorFlow, Google's artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by the genomics computing platform DNAnexus, DeepVariant vastly outperformed GATK, FreeBayes, and Samtools, sometimes reducing errors by as much as 10-fold.

“That shows that this technology really has an important future in the processing of bioinformatic data,” says DNAnexus CEO Richard Daly. “But it’s only the opening chapter in a book that has 100 chapters.” Daly says he expects this kind of AI to one day actually find the mutations that cause disease. His company received a beta version of DeepVariant, and is now testing the current model with a limited number of its clients—including pharma firms, big health care providers, and medical diagnostic companies.

To run DeepVariant effectively for these customers, DNAnexus has had to invest in newer generation GPUs to support its platform. The same is true for Canadian competitor DNAStack, which plans to offer two different versions of DeepVariant—one tuned for low cost and one tuned for speed. Google’s Cloud Platform already supports the tool, and the company is exploring using the TPUs (tensor processing units) that power things like Google Search, Street View, and Translate to accelerate the genomics calculations as well.

DeepVariant’s code is open-source so anyone can run it, but to do so at scale will likely require paying for a cloud computing platform. And it’s this cost—computational and in terms of actual dollars—that has researchers hedging on DeepVariant’s utility.

“It’s a promising first step, but it isn’t currently scalable to a very large number of samples because it’s just too computationally expensive,” says Daniel MacArthur, a Broad/Harvard human geneticist who has built one of the largest libraries of human DNA to date. For projects like his, which deal in tens of thousands of genomes, DeepVariant is just too costly. And, just like current statistical models, it can only work with the limited reads produced by today’s sequencers.

Still, he thinks deep learning is here to stay. “It’s just a matter of figuring out how to combine better quality data with better algorithms and eventually we’ll converge on something pretty close to perfect,” says MacArthur. But even then, it’ll still just be a list of letters. At least for the foreseeable future, we’ll still need talented humans to tell us what it all means.

1 Correction 12/12/17 4:28pm EST An earlier version of this article incorrectly referred to what DeepVariant does as "assembling genomes." The tool calls variants, which is an important part of the genotyping process, but one not involved in genome assembly. WIRED regrets the error.

It seems like I have been slightly obsessed with flashlights for quite some time. Perhaps it started when the Maglite lights became popular in the '80s. It was that mini Maglite that ran on 2 AA batteries that I really liked. It was small enough that you could carry it around and bright enough that it could actually be useful. When I was a bit older, I would even build and modify my own flashlights. One of my favorites was an underwater light I used for cave diving. It ran on a large lead-acid battery and powered a 25-watt projector bulb. That thing was great (but not super portable).

These days, you can get some of these super-bright LED lights. They're cheap and they last a long time, so I guess there's no point in trying to find the best flashlight anymore. But there's always a point to doing some physics! So my next step is to analyze the brightness—that's what I do.

The flashlight I have is listed at 900 lumens. But what the heck is a lumen?

The study of light is really old. Back in the day, they didn't use fancy LED lights or measure stuff with computers. No, they just used candles. In fact, the unit of the lumen comes from a candle. Yes, an actual candle. Technically, it's from a standard candle—which is a candle that produces a flame of a particular, reproducible brightness. As a candle makes light, that light spreads out and decreases in intensity, and if you were to integrate the intensity over a whole sphere surrounding the candle, you would get the total output (technically called the luminous flux—which sounds pretty cool). One candle would have a brightness of 4π lumens (the 4π comes from the solid angle of a full sphere).
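
For what it's worth, that 4π works out to a concrete number. Here's the quick calculation (assuming a standard candle has a luminous intensity of about 1 candela):

```python
import math

candle_intensity = 1.0      # candela -- lumens per steradian, roughly one standard candle
full_sphere = 4 * math.pi   # steradians in a complete sphere
luminous_flux = candle_intensity * full_sphere
print(luminous_flux)        # about 12.57 lumens from a single candle
```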

But wait! How is this different from total power output? Yes, a light does emit energy per unit time, which could be calculated as power (in watts). For a typical light, though, much of this power is in forms that are not detectable by the human eye—like stuff in the infrared spectrum. So the brightness in lumens counts only the human-detectable stuff. Really, this is for the best, since in the earlier days of science the only light detector available was the human eye.

If I want to measure the brightness of a flashlight, I don't really want to use my eyes. Eyes don't give a numerical value as an output, which means I would have to do something weird to convince you that a light has a particular brightness. Instead I am going to use a light sensor (I'm using this one)—but it doesn't measure the total brightness. It measures the luminous intensity (at least that's what I call it; others call it the illuminance). This is the visible brightness per unit area, and it is measured in lux.

Since this might be getting confusing, let me use an analogy. Suppose you have a sheet of paper that is in the rain and getting wet. There are two things to consider for your wet paper. First, there is the rate of rain. It can rain hard or soft. This is like the luminous intensity. Second, there is the rate that water hits the paper. This depends on both the rain rate and the size of the paper. The total rain hitting the paper would be like the luminous flux (in lumens).

In order to calculate the luminous flux, I can measure the luminous intensity and assume it's constant over some area. The product of intensity and area would give me the luminous flux in lumens. So here's what I'm going to do. I will take my flashlight and shine it at the light sensor. I will also measure the size of the light spot that it makes (in square meters). If the intensity is constant over the whole area, then I just need to multiply the area by the value in lux.

However, since I like to make graphs, I will do a slight variation. My flashlight can create a variable spot size. This means I can plot the intensity vs. one divided by the spot area. The slope of this line should be the luminous flux. Let's do it. (Here is the online plot)
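
If you want to see the structure of that calculation, here's a short sketch in Python. The measurements below are made up for illustration—chosen so the fit lands on a 130-lumen slope like the one reported below—and are not actual sensor readings:

```python
import numpy as np

# Made-up measurements for illustration (not actual sensor readings):
# intensity in lux for four different spot sizes.
spot_area = np.array([0.05, 0.10, 0.20, 0.40])        # square meters
intensity = np.array([2600.0, 1300.0, 650.0, 325.0])  # lux at the sensor

# Plot intensity vs. 1/area; the slope of that line is the luminous flux.
inverse_area = 1.0 / spot_area
slope, intercept = np.polyfit(inverse_area, intensity, 1)
print(slope)  # luminous flux in lumens (130 for these made-up numbers)
```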

This looks pretty linear—so that's good. However, I get a slope of 130 lumens—that's bad, because the flashlight is listed at 900 lumens. OK, here are some possible problems with this value.

  • The flashlight is wrong. It's not 900 lumens, it's only 130 lumens.
  • I made an assumption that was incorrect. I assumed the luminous intensity was fairly constant over the whole spot. If it's not constant, I would have to measure intensity at different locations and integrate over the area. I don't think my assumption was super bonkers wrong—if it was, then the plot of intensity vs. one over the area wouldn't be linear.
  • It's possible that the listed flashlight brightness was based on a different wavelength than the one measured by my light sensor. This shouldn't happen, though.
  • Finally, it's always possible that I messed up somewhere.

Whatever the brightness, it's still a pretty nice flashlight. But maybe I should build one of these water-cooled LED lights. That would be fun.

Your brain is one enigmatic hunk of meat—a wildly complex web of neurons numbering in the tens of billions. But years ago, when you were in the womb, it began as little more than a scattering of undifferentiated stem cells. A series of genetic signals transformed those blank slates into the wrinkly, three-pound mass between your ears. Scientists think the way your brain looks and functions can be traced back to those first molecular marching orders—but precisely when and where these genetic signals occur has been difficult to pin down.

Today, things are looking a little less mysterious. A team of researchers led by neuroscientists at UC San Francisco has spent the last five years compiling the first entries in what they hope will become an extensive atlas of gene expression in the developing human brain. The researchers describe the project in the latest issue of Science, and, with the help of researchers at UC Santa Cruz, they've made an interactive version of the atlas freely available online.

"The point of creating an atlas like this is to understand how we make a human brain," says study coauthor Aparna Bhaduri. To do that, she and her colleagues analyzed not only how gene expression varies from cell to cell, but where and at what stages of brain development those genes come into play.

Crucially, the researchers performed that analysis at the level of individual brain cells—a degree of specificity neuroscientists have struggled to achieve in the past. That's huge, in part because it gives researchers their clearest picture yet of where and in which cells certain genes are expressed in the fetal brain. But it also means scientists can begin to characterize early brain cells not according to things like their shape and location (two variables that neuroscientists have long used to classify cellular types and subtypes), but by the bits of DNA they turn on and off. As developmental neurobiologist Ed Lein, who was unaffiliated with the study, says: "This is not the first study in this area by any means, but the single cell technique is a game changer."

Lein would know. An investigator at the Allen Institute for Brain Science (a key institutional player in the mission to map the human brain, and the home of several ambitious brain atlas projects from the past decade), he and his colleagues performed a similar survey of gene expression in developing human brains in 2014. To build it, they sliced fetal brain tissue into tiny pieces and scanned them for gene expression. But even after dissecting them as finely as possible, Lein says the cell populations of the resulting brain bits were still extremely diverse. Even a microscopic speck of gray matter contains a menagerie of functionally distinct cells, from astrocytes to neurons to microglia (though, to be perfectly frank, neuroscientists aren't even sure how many cell types exist).

"When we measured the genes in our samples," says Lein, "what we actually saw was the average output of all the cells in that sample." When they were through, Lein and his colleagues had mapped the location and activity of some 20,000 genes in anatomical regions throughout the brain. But they still didn't know which individual cells those genes came from.

UCSF's new brain atlas doesn't span as many regions as the Allen Institute's (not yet, at least), but the anatomical areas it does cover, it covers with much greater specificity. "The difference between previous studies and ours is the difference between a smoothie and a fruit salad," says study coauthor Alex Pollen. "They have the same ingredients, but one mixes them together and the other looks at them individually."

The UCSF researchers focused on regions of the developing brain that eventually become the basal ganglia, which helps orchestrate things like voluntary motor control, and the cerebral cortex, the largest region of the mammalian brain and the seat of many human cognitive abilities. By examining the expression of individual cells from 48 brains at various stages of development, the researchers were able to trace a handful of genetic and developmental patterns to 11 broad categories of cell—and make a number of unexpected observations.

"One big surprise is that region-specific neurons seem to form very early in the developmental process," says neurobiologist Tomasz Nowakowski, who led the study. That includes neurons in the prefrontal cortex, whose formation neuroscientists have long theorized to be influenced by sensory experience. But the new atlas suggests those areas begin to take shape before sensory experiences even have a chance to take place. That's the kind of finding that could fundamentally alter neuroscientists' understanding of the structure and function of adult brains.

The project's other takeaways are too numerous to list here. But that's the thing about brain atlases: They're great at generating questions. "These things are foundational," Lein says. "The reason these atlases are valuable is you can do a systematic analysis in one fell swoop and generate 10,000 hypotheses." Testing the hypotheses generated from this latest atlas will hinge on researchers' ability to access and add to it, which is why Nowakowski and his colleagues collaborated with UC Santa Cruz computer programmer Jim Kent to turn their database into an interactive, online visualization.

Researchers will also want to cross-reference this atlas with similar projects. After all, there's more than one way to map a brain. You can classify its neurons by shape, location, or the genes they express. You can map the gene expression of old brains, young brains, and brains of different species. A recent project from the Allen Institute even classifies neurons according to their electrical activity. Brain atlases are like puzzle pieces that way: The more you have to work with, the easier it is to see the big picture—and how all the pieces fit.

How to Run Up a Wall—With Physics!

March 20, 2019

I can't decide if this looks like something from a super hero movie or from a video game. In this compilation video of crazy stunts, a guy somehow finds a way to bound up between two walls by jumping from one to the other. "Somehow," of course, means with physics: This move is based on the momentum principle and friction. Could you pull it off? Probably not. But you can at least do the math.

The first key to this move is the momentum principle, where momentum is the product of an object's mass and velocity. The momentum principle says that in order to have a change in momentum, you need a net force acting on an object. This can be described with the following vector equation (note that everyone uses the symbol "p" to represent momentum): F_net = Δp/Δt.

How about a quick example to show how this works? Take an object like a pencil, a ball, or a sandwich and hold it out at arm's length. Now let go of the object. After your hand releases contact with the object (in my mind, it's a sandwich), there is only one force acting—the gravitational force pulling down. What does this force do to the object? It changes the object's momentum in the direction of the force. So after 0.1 seconds, the object has some downward momentum, which means it speeds up (since the mass is constant). After the next 0.1 seconds, the object gets even faster. In fact, the sandwich will continue to speed up as it falls until there is another force acting on it (from the floor) to slow the sandwich down. Don't worry, you can still eat it if you get it before the five second rule is over.

The second key idea needed to run up a wall is friction. Friction is a force that acts on an object when two surfaces are pushed together. For a fairly reliable model of this force, we can say that the friction force is parallel to the two interacting surfaces, with a magnitude proportional to the force pushing those surfaces together. This can be modeled with the following equation: F_friction ≤ μs × N.

This expression is for the maximum friction force between two surfaces. In the equation, μs is the coefficient of static friction that depends on the two materials interacting and N is the force pushing the two surfaces together (we call this the normal force).

So you can see that this friction will be necessary to run up the wall. However, you can't just run up a vertical wall because there would be no (or very little) normal force between your foot and the wall. With no normal force, there is no friction. That's bad. You fall.

Now for the video. The guy is able to run up the wall by first moving toward the wall and then moving away from it. This means there is a change in momentum (since the velocity changes direction), and that change in momentum requires a force—in this case, a force from the wall. The problem is that you can only push on the wall for a short amount of time before you move away from it and lose contact. What makes it work here is that there is another wall on the other side, so the runner can switch and repeat the move again.

Here is a diagram; this should help.

If I can estimate the change in momentum and the time interval for one of these wall jumps, I can calculate the force from the wall and then the required frictional force. Let's do it.

For this analysis, I will need to get an approximate position of the human in each frame and I can do this with video analysis. I'm making some guesses on the size of things, but I suspect it's close enough for a rough value of friction. Although the camera zooms and pans during the motion, I can correct for that with some software. Here's what it looks like.

But that's not what I want. I want to look at the change in momentum. Here is a look at just the x-motion during this wall climb.

From the slope of this position-time graph, I can get the x-velocity before the collision with the wall: 1.39 m/s. After the collision, the dude is moving with an x-velocity of -2.23 m/s. If I assume a human mass of 75 kg, this is a change in x-momentum of -271.5 kg*m/s. Also looking at the graph, I get an interaction time of about 0.2 seconds. The change in x-momentum divided by the time interval gives the average x-force, with a value of 1357.5 newtons (in the negative x-direction). For the imperial people, that is a force of 305 pounds. Yes, that's a lot—but it's just for a short period.

Since this force is in the x-direction, it is the same as the normal force that pushes between the foot and the wall. Using the model of friction above, I can solve for the coefficient of friction since the magnitude of friction must be the weight (75 kg x 9.8 N/kg = 735 N). This means the minimum coefficient of static friction must be 0.54 (there are no units for this coefficient). And that's just fine. This table of coefficients of friction lists rubber on concrete with a range of 0.6 to 0.85—so this is entirely plausible.
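
Here is that whole chain of arithmetic in one short Python sketch, using the values estimated above (the 75 kg mass is still just an assumption):

```python
# Values estimated from the video analysis above (75 kg is an assumed mass).
mass = 75.0       # kg
v_before = 1.39   # m/s, x-velocity heading toward the wall
v_after = -2.23   # m/s, x-velocity heading away from the wall
dt = 0.2          # s, contact time with the wall
g = 9.8           # N/kg

delta_p = mass * (v_after - v_before)   # change in x-momentum: -271.5 kg*m/s
normal_force = abs(delta_p) / dt        # average push from the wall: 1357.5 N (about 305 lb)
friction_needed = mass * g              # friction must support the weight: 735 N
mu_minimum = friction_needed / normal_force
print(normal_force, round(mu_minimum, 2))  # 1357.5 N and a minimum coefficient of 0.54
```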

Still, I wouldn't recommend you try any old "plausible" move. It takes a long while to train from plausible to possible.

3 Smart Things: The Hidden Lives of Liquids

March 20, 2019

1. Most substances make a clean break between their liquid and solid states. But liquid crystals straddle the boundary: They flow smoothly, like water, while maintaining a crystalline structure. A tiny jolt of electricity aligns the molecules in the same direction and allows them to rotate light—an effect you see when the pixels in your LCD television or smartphone flip on and off to form pretty images.

2. Our body’s natural lubricant, saliva, does double duty by sloshing away bacteria and neutralizing your mouth’s acidity. There’s high demand for the liquid. Human salivary glands pump out about a quart a day—a Big Gulp! The dry-of-mouth can always purchase backup sprays, gels, and swabs on the $1 billion artificial saliva market.

3. Packed with enough dissolved oxygen, liquid becomes breathable. While water can’t hold the requisite O2, the synthetic oil perfluorocarbon can absorb three times more oxygen than blood, meaning you could survive prolonged submersion (and even take a selfie—perfluorocarbon doesn’t harm electronics). This property also makes doctors hopeful that the oil could one day help soothe the frail lungs of premature newborns.

Adapted from Liquid Rules: The Delightful and Dangerous Substances That Flow Through Our Lives, by Mark Miodownik, out February 19.

Nearly 300 million years ago, a curious creature called Orobates pabsti walked the land. Animals had just begun pulling themselves out of the water and exploring the big, dry world, and here was the plant-eating tetrapod Orobates, making its way on four legs. Paleontologists know it did so because one particularly well-preserved fossil has, well, four legs. And luckily enough, scientists also discovered fossilized footprints, or trackways, to match.

The assumption has been that Orobates—a cousin of the amniote lineage, which today includes mammals and reptiles—and other early tetrapods hadn’t yet evolved an “advanced” gait, instead dragging themselves along more like salamanders. But today, in an epically multidisciplinary paper in Nature, researchers detail how they married paleontology, biomechanics, computer simulations, live animal demonstrations, and even an Orobates robot to determine that the ancient critter probably walked in a far more advanced way than was previously believed possible. And that has big implications for the understanding of how locomotion evolved on land, not to mention how scientists study the ways extinct animals of all types got around.

Taken alone, a fossil skeleton or fossil trackways aren’t enough to divine how an animal moved. “The footprints only show you what their feet are doing,” says biomechanist John Hutchinson at the Royal Veterinary College, coauthor on the new paper, “because there's so many degrees of freedom, or different ways a joint can move.” Humans, after all, share an anatomy but can manage lots of silly ways to walk with the same equipment.

Without the footprints, the researchers wouldn’t be able to tell with much confidence how the fossil skeleton moved. And without the skeleton, they wouldn’t be able to fully parse the footprints. But with both, they could calculate hundreds of possible gaits for Orobates, from the less advanced belly-dragging of a skink to the more advanced, higher posture of a crocodilian running on land.

They then used a computer simulation to toy with the parameters, such as how much the spine bends back and forth as the animal moves. “The simulation basically told us the forces on the animal, and gave us some estimates of how the mechanics of the animal may have worked overall,” says Hutchinson.

You can actually play with the parameters yourself with this fantastic interactive the team put together. Seriously, click on it and play along with me.

The dots in the three-dimensional graphs are possible gaits. Blue dots get high scores, and red dots get low scores. Double-click on one and below you’ll see that particular gait at work in simulation. You’ll notice that the red dots make for gaits that look a bit … ungainly. Dark blue dots, however, look like they’re a more reasonable way for a tetrapod to move. At bottom you’ll see videos of extant species like the iguana and caiman (a small crocodilian). It was observations of these species that helped the researchers determine what biomechanical factors are important, such as how much the spine bends.

A few other parameters: The sliders on the left let you monkey with things like power expenditure. Slide it to the right and you’ll notice the good blue dots disappear.

Here’s where things get tricky, though. Power efficiency is key to survival, of course, but it’s not the only constraint in biomechanics. “Not all animals optimize for energy, especially species that only use short bursts of locomotion,” says Humboldt University of Berlin evolutionary biologist John Nyakatura, lead author on the paper. “Obviously for species that travel long distances, energy efficiency is very important. But for other species it might be less important.”

Another factor is something called bone collision (which is a great name for a metal band). When you’re putting together a fossil skeleton, you don’t know how much cartilage surrounded the joints, because that stuff rotted away long ago. And different kinds of animals have different amounts of cartilage.

So that’s a big unknown with Orobates. In the interactive, you can dial the bone collision up and down with the slider at left. “You can allow bones to collide freely or just gently touch,” says Hutchinson. “Or you can dial it up to a level of 4 and allow no collisions, which is basically saying there must be a substantial space between the joints.” Notice how that changes the dots in the graph: The more collision you prevent, the fewer the potential gaits. “Whereas if you allow plenty of collision, there's just more possibilities for the limb to move.”

Now, the robot. The team designed OroBOT to closely match the anatomy of Orobates. It’s of course simplified from the pure biology, but it’s still quite complicated as robots go. Each limb is made up of five actuated joints (“actuators” being the fancy robotics term for motors), while the spine has eight actuated joints that allow it to bend back and forth. In the interactive, you can play with the amount of spine bending with a slider at left, and see how dramatically that changes the gait. Also, take a look at the video of the caiman in there to see just how much its own spine bends as it moves.

The beauty of the simulation is you can run all kinds of different gaits relatively quickly. But not so with a robot. “Running too many experiments with a physical platform is quite time-expensive, and you can also damage the platform,” says coauthor and roboticist Kamilo Melo of the Swiss Federal Institute of Technology Lausanne. Running simulations helped whittle down the list.

“In the end we have several gaits we know are quite good, and those are the kinds of gaits we actually test with the real robot,” adds Melo.

What they found was that given the skeletal anatomy and matching trackways, it was likely that Orobates walked fairly upright, more like a caiman than a salamander. “Previously it was assumed that only the amniotes evolved this advanced terrestrial locomotion,” says Nyakatura. “That it is already present in Orobates demonstrates that we have to assume that locomotor diversity to be present a bit earlier.” An important confirmation from the trackways: There are no markings that would correspond to a dragging tail.

So thanks to a heady blend of disparate disciplines, the researchers could essentially resurrect a long-dead species to determine how it may have walked. “Because they have brought digital modeling and robotics and all those things together to bear on this one animal, we can be pretty confident that they've come up with a reasonable suggestion for how it moved,” says paleontologist Stuart Sumida of California State University San Bernardino. He’s got unique insight here, by the way: He helped describe Orobates in the first place 15 years ago.

It’s key to also consider where Sumida and his colleagues found the fossil, in Germany. Around 300 million years ago, there was no running water at the dig site. And it’s running water that paleontologists typically rely on to preserve specimens in mud. “This was an utterly terrestrial environment that just happened to flood occasionally,” says Sumida. “And so you get a very unusual snapshot of what life was like not in the water.”

The upright gait of Orobates, then, would make sense. “This is a thing that walked around with great facility on the land, and this is exactly what the geology suggested,” says Sumida. What that means, he adds, is that Orobates and perhaps other early land-going species adapted to their environment faster than expected.

As the Bee Gees once said: “You can tell by the way I use my walk, I’m a comfortably terrestrial early tetrapod, no time to talk.”

For the last two days, a colossal, coursing stream of super-soaked subtropical air has been pummeling California with record-shattering amounts of moisture. On Wednesday, parts of northern California received more snow in a day than New England cities like Boston have seen all winter. On Thursday, Palm Springs got eight months’ worth of rain in as many hours. In San Diego and Los Angeles, brown water thick with desert dust flooded streets, triggered mudslides, and opened up sinkholes.

The 300-mile-wide, 1,000-mile-long atmospheric river that carried all this precipitation is starting to dry up, and the worst of the drench-fest is over. But all the new rainfall records highlight the fact that atmospheric rivers, while long a distinctive feature of weather in the American West, are intensifying in a climate-changed world.

If you haven’t heard the phrase “atmospheric river” before, don’t feel too bad. It’s a meteorological term of art that hasn’t yet cracked the pop cultural lexicon, unlike some of its flashier cousins—the polar vortex, bomb cyclone, and fire clouds, to name a few. Even the American Meteorological Society only added a definition for atmospheric river to its glossary last year.

The phenomenon itself isn’t a new one: For a long time it’s been pretty normal for California to receive most of its yearly precipitation in just a few big storms. Most of those multiday deluges are the product of atmospheric rivers, high-altitude streams of air that originate near the equator and are packed with water vapor. But it’s only been in the last decade or so that scientists have learned enough about this type of weather system to tell the difference between beneficial, run-of-the-mill storms that keep water reserves full and disastrous storms that overwhelm dams, levees, and reservoirs, like the one that pummeled California this week. As that balancing act gets even tougher for the region’s water managers, some scientists are making a push to put a number on those differences, in the same way you would a tornado or a hurricane.

“Your typical weather forecast displays a symbol—a sun for sunny days, a cloud for cloudy days. But the rain cloud symbol doesn’t really describe if it’s going to be a few showers or one of these more unusually substantial storms,” says F. Marty Ralph, a research meteorologist at UC San Diego’s Scripps Institution of Oceanography and director of its Center for Western Weather and Water Extremes. He’s been spearheading a multiyear effort to develop a five-category scale for diagnosing the strength of atmospheric rivers so that water managers, emergency personnel, and the general public can quickly get a grasp on just how destructive (or beneficial) the next storm will be.

Ralph’s team unveiled their AR Cat scale earlier this month, in an article published in the Bulletin of the American Meteorological Society. The key feature it uses to assess the severity of such storms is the amount of water vapor flowing horizontally in the air. Called integrated vapor transport, or IVT, this number tells you how much fuel is feeding the system.

It’s not an easy number to calculate. To do it well requires taking multiple wind and water vapor measurements across miles of atmosphere. In the same way that terrestrial rivers flow at different rates at different depths, the water vapor molecules in atmospheric rivers travel at different speeds in the air column. Adding them all up vertically gives you the true measure of how strong a storm really is. Ralph’s team classifies storms as atmospheric rivers if they’re moving more than 250 kilograms of water per meter per second, ranging up from weak to moderate, strong, extreme, and exceptional.

But strength alone doesn’t predict how dangerous a storm will be. That’s why the AR Cat scale combines a storm’s IVT with how long it’s expected to linger. Storms that blow through in fewer than 24 hours get downgraded by one category, whereas storms that last longer than 48 hours immediately get bumped up a notch. So an “extreme” storm could be either a Cat 3 (balance of beneficial and hazardous), Cat 4 (mostly hazardous), or Cat 5 (hazardous) depending on what it does once it makes landfall.
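
Here's a rough sketch of how that ranking could look in code. The duration adjustments are the ones described above; the intensity cutoffs above the 250 kg/m/s threshold are my assumption that the bands step up in even 250-unit increments, which the article doesn't spell out:

```python
def ar_category(ivt, duration_hours):
    """Sketch of the AR Cat scale as described above.

    ivt: integrated vapor transport, in kg of water per meter per second.
    duration_hours: how long atmospheric-river conditions persist at a spot.
    The intensity bands above 250 kg/m/s are assumed here to rise in even
    250-unit steps (weak, moderate, strong, extreme, exceptional).
    """
    if ivt < 250:
        return None  # below the threshold for an atmospheric river

    # Preliminary rank from intensity alone: 1 (weak) through 5 (exceptional).
    rank = min(5, 1 + int((ivt - 250) // 250))

    # Duration adjustment from the article: quick-moving storms drop a
    # category, storms that linger past 48 hours get bumped up a notch.
    if duration_hours < 24:
        rank -= 1
    elif duration_hours > 48:
        rank += 1

    return max(1, min(5, rank))  # clamped to 1-5 for this sketch

# An "extreme" storm can land at Cat 3, 4, or 5 depending on how long it lingers:
print(ar_category(1100, 20), ar_category(1100, 36), ar_category(1100, 60))  # 3 4 5
```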

That’s because the longer a storm hovers over land, funneling many Mississippi Rivers’ worth of moisture into its watersheds, the more strain it puts on those systems. The most destructive hurricanes in recent memory—Harvey in Texas and Florence in North Carolina—proved so catastrophic because they stalled over land, inundating those areas with multiple days of intense rainfall. But the current hurricane scales, which are based on wind speed, don’t take time into account. “With atmospheric rivers we had the opportunity to bake those numbers in from the very beginning,” says Ralph.

The AR Cat scale is, of course, only as reliable as the forecast model it’s built upon. And accurately predicting atmospheric rivers has long frustrated meteorological researchers. Models built on satellite data regularly flub the location of landfall by 250 miles, even when the storm is just three days out. Some of that data got a signal boost this week, as GOES-17, NOAA’s next-generation satellite, became operational over the western part of the United States.

GOES-17’s powerful new camera will fill in important gaps, especially over the Pacific Ocean, where coverage was previously sparse. “It was like watching a black-and-white television, and now we have full HD,” says Scott Rowe, a meteorologist with the National Weather Service’s Bay Area station. The new satellite also refreshes data at a much higher rate—taking a new image once every five minutes as opposed to every 10 or 15. In special circumstances, NWS forecasters can request to crank it up one notch further. On Thursday, when Rowe’s office was busy trying to predict where the California storm would go next, GOES-17 was snapping and sending images once every minute.

But according to Ralph, the new satellite’s not a complete fix for atmospheric river forecasting, because high clouds can mask what’s going on inside the storm. More fruitful are the regular reconnaissance missions Ralph has been coordinating for the past three years, sending US Air Force pilots in hurricane hunter airplanes to crisscross incoming streams of hot, wet air. At regular intervals they drop meteorological sensing devices known as dropsondes, which draw a more intimate portrait of each storm’s potential for precipitation.

It’s all part of a broader effort to help stewards of the region’s freshwater resources make better decisions about whether to keep water and risk a flooding event, or let it out ahead of the storm and risk it being a bust. The AR Cat scale, which Ralph says still needs some tuning to better articulate the risks and benefits of different kinds of storms, is aimed at making those decisions for reservoir operators as easy as one, two, three, four, five.

Knowing that a storm like the one that hit this week is a Cat 4 atmospheric river may not mean much to the average person just yet. Calibrating an arbitrary value to observed reality takes time and experience. But it’s a sign of the American West’s intensifying weather patterns that its residents need that language at all.

Cannabis is a hell of a drug. It can treat inflammation, pain, nausea, and anxiety, just to name a few ailments. But like any drug, cannabis comes with risks, chief among them something called cannabis use disorder, or CUD.

Studies show that an estimated 9 percent of cannabis users will develop a dependence on the drug. Think of CUD as a matter of the Three C’s, “which is loss of control over use, compulsivity of use, and harmful consequences of use,” says Itai Danovitch, chair of the department of psychiatry and behavioral neurosciences at Cedars-Sinai. A growing tolerance can also be a sign.

Compared to a drug like heroin, which can hook a quarter of its users, the risk of dependency with cannabis is much lower. The symptoms of withdrawal are also far less severe: irritability and depression with cannabis, compared to seizures and hallucinations with heroin. Plus, an overdose of cannabis can’t kill you.

But as medicine and society continue to embrace cannabis, we risk losing sight of the drug’s potential to do harm, especially for adolescents and their developing brains. Far more people use cannabis than heroin, meaning that the total number of users at risk of dependence is actually rather high. And studies are showing that the prevalence of CUD is on the rise—whether that’s a consequence of increased use due to legalization, a loss of stigma in seeking treatment, or some other factor isn’t yet clear. While cannabis has fabulous potential to improve human physical and mental health, understanding and then mitigating its dark side is an essential component.

Dependence is not the same as addiction, by the way. Dependence is a physical phenomenon, in which the body develops tolerance to a drug, and then goes into withdrawal if you suddenly discontinue use. Addiction is characterized by a loss of control; you can develop a dependence on drugs, for example steroids, without an accompanying addiction. You can also become addicted without developing a physical dependence—binge alcohol use disorder, for instance, is the condition in which alcohol use is harmful and out of control, but because the use isn't daily, significant physical dependence may not have developed. “An important similarity that all addictive substances tend to have is a propensity to reinforce their own use,” says Danovitch.

Cannabis, like alcohol or opioids, can lead to both physical dependency (and the accompanying withdrawal symptoms) and addiction. But the drug itself is only part of the equation. “The risk of addiction is really less about the drug and more about the person,” says Danovitch. If it was just about the drug, everyone would get hooked on cannabis. Factors like genetics and social exposure contribute to a person’s risk.

Another consideration is dosing. Cultivators have over the decades developed strains of ever higher THC content, while the compound in cannabis that offsets THC’s psychoactive effects, CBD, has been almost entirely bred out of most strains. Might the rise in the prevalence of CUD have something to do with this supercharging of cannabis?

A new study in the journal Drug and Alcohol Dependence found that individuals whose first use of cannabis involved a high THC content (an average of around 12 percent THC) had more than four times the risk of developing the first symptom of CUD within a year. (Two caveats: the participants in this study had a history of other substance abuse disorders, and the study looked at the first symptom of CUD, not a full-tilt diagnosis.)

Figuring out such details improves the odds that we’ll be able to detect and treat cannabis use disorder. “Early intervention is important to address substance use before it progresses to a substance use disorder,” says Iowa State University psychologist Brooke Arterberry, coauthor of the study. But to pull that off, she says, we need to better understand when and why symptoms emerge.

Those answers will likely be especially important in intervening with adolescent users, whose brains continue to develop into their mid-20s. Studies suggest that heavy cannabis use among this demographic can lead to changes in the brain. Particularly concerning is the apparent link between cannabis and schizophrenia, the onset of which can happen in the early 20s.

It’s also important to keep in mind that in the grand scheme of drugs, cannabis is nowhere near as risky as opioids. But because of prohibition, scientists have been hindered in their ability to gather knowledge of how cannabis works on the human body, and how different doses affect different people (and potentially the development of CUD). Once acquired, those insights can inform how people should be using the drug. Groups like the National Organization for the Reform of Marijuana Laws, for example, want proper labeling to keep cannabis out of the hands of children. And we need clear communication of the potency of products that can be very powerful—a chocolate bar containing 100 milligrams of THC is not meant to be consumed all at once.

“The reasons we demand proper labeling is all because of an awareness that cannabis is a mood-altering substance,” says Paul Armentano, the organization’s deputy director. “It possesses some potential level of dependence and it carries potential risk. And we believe prohibition exacerbates those potential risks, while regulation potentially mitigates those risks.” Like other substance disorders, cannabis use disorder is treatable. And as scientists develop a better understanding of CUD, we can intervene with appropriate therapies.

Cannabis has big potential to treat a range of ills. And it’ll benefit users even more once we’ve characterized its risks more precisely.