
The birth of Lulu and Nana—the first two babies believed to be born with Crispr-edited DNA—has triggered soul-searching in China as tech innovators, scientific researchers, and government bureaucrats reconcile conflicting values.

At first Chinese media celebrated Jiankui He, the scientist who last week announced he had edited the girls' DNA. Some pundits even speculated whether a Nobel prize might be in the making. But within hours the story began to flip, and the narrative that emerged across the mainland was one of caution and censure. As Chinese scientists and technologists try to speed ahead with innovative research, they are also being reined in by government officials who are mindful of ethical sensibilities in China and abroad.


If the news of the human embryo gene-editing experiment reached mainly science-minded readers in the US, in China its impact was far greater. On Weibo, a popular Chinese platform, the hashtag “First Case of Gene-Edited HIV Immune Babies” drew 1.9 billion views. He’s research seemed to fit the script of the “China Dream,” a call for revolutionary scientific research and innovation issued by President Xi Jinping in 2017. This national policy aims to disrupt Western modernity with dreams of an Asian future.

Chinese scientists, however, did not jump to praise He. Some of them started issuing rebukes on social media. Zhengzhong Qu, a gene-editing scientist from China, criticized researchers who use “marketing gimmicks” to get famous, a reference to He's decision to post promotional videos of his work on YouTube. Qu also critiqued He's choice of DNA edits, which were done to confer built-in HIV resistance to the girls. “This has no meaning in clinical practice: there are already mature and effective techniques for protecting a baby from their parent’s HIV infections,” he posted on WeChat. “More risk than benefit in this case.”

Southern University of Science and Technology, where He is a professor, said it was “deeply shocked by this event” and launched an investigation. The government weighed in too: “China has banned reproductive use of gene editing in human embryos,” said Nanping Xu, the vice minister of science and technology.

To outsiders, the response to He's gene-edited babies may look like a departure from the speed-at-all-costs ethos that has seemed to characterize Chinese innovation in recent years. But a national rebranding campaign is underway. "Made in China," a label associated with cheap knock-offs, piracy, and stolen intellectual property, is being replaced with "Created in China." The gene-editing field is a prime example of how this dynamic is playing out.

Much of the action takes place in Shenzhen, the city where He works and which gave rise to the idea of "Shenzhen Speed." In the 1980s, skyscrapers were sprouting across the landscape faster there than anywhere else in the world.  Workers who aspired to better futures at iPhone factories flocked to Shenzhen from rural areas of China.

Immense speed comes with a cost. The city's own development has been hindered by an obsession with progress: In 2015, 17 buildings dramatically collapsed when a landslide of construction waste buried industrial buildings and workers’ living quarters. The risk of careless gene editing could be even greater.

Yet Shenzhen Speed has also been applied to the pace of biotechnology—the redesign of life itself. Chinese synthetic biologists have used Crispr to produce micro-pigs, humanized monkeys, and dogs with huge muscles. A Shenzhen company called BGI, which claims to be the largest genomics organization in the world, is a major player in the field of DNA sequencing. BGI has ambitions of sequencing every human and every form of life, and aspires to move "to writing, from design to synthesis."

But some of that bold language has been tempered in the wake of the Crispr babies scandal. "We need to be really careful not to do this kind of thing," says BGI’s Associate Director Xin Liu. Along with 53 other Chinese biotechnology companies, BGI issued a joint statement: "We must avoid the absolute pursuit of quick success in innovation and development, and rather work to strengthen industry self-discipline." These companies now say they aspire to "make life science and technology truly beneficial to mankind."

Gene-editing experiments on human embryos are continuing in China (and elsewhere)—but now, for research purposes only. At a summit on genome editing held in Hong Kong last week, Junjiu Huang, a biologist from Sun Yat-sen University, discussed how he had cloned human embryos and repaired a defective gene causing beta thalassemia, a blood disorder. But Huang concluded his talk with a strong condemnation of "any application of gene editing on human embryos for reproductive purposes. Such intervention is against the law, regulation, and medical ethics of China."

Huang himself sparked controversy in April 2015 when he used Crispr to create the world’s first gene-edited human embryos. That incident triggered outrage internationally but got only a muted response in China. Secular Chinese ethics draws on Confucian thought, which assumes that a person becomes a person after birth, not before. So Huang was in the clear, but when He allowed edited embryos to then be born, he crossed an ethical line. Laws about gene editing were confusing in China before He’s experiment, but officials are quickly putting new rules in place. The government has now banned reproductive uses of Crispr, while saying that basic embryonic research will continue.

A bioethicist at the Chinese Academy of Social Sciences, Renzong Qiu, called on summit attendees to "protect the interest of the future child" and asked the Chinese government to develop "special regulations on applying genome editing in human reproduction." This would involve a licensing system and ethical guidelines to prevent eugenic uses of the technology.

Asian innovations in the field of biotechnology are redefining the horizons of possibility for the rest of the world. After the initial spark of fear in response to Huang's Crispr-edited embryos faded away, scientists in the United States and Europe ended up conducting similar studies.

Jennifer Doudna, the UC-Berkeley biochemist credited as one of the discoverers of Crispr, says that the current kerfuffle could play out in a similar way. "Two years from now, let’s say, if those girls are healthy…People will look back in retrospect and they will say, 'Maybe the process wasn’t correct, but the outcome could be fine.'"

Take a pencil, stretch out your arm, and let go. We all know that the pencil will fall. OK, but what about dropping a bowling ball? Is that the same thing? No wait! How about a watermelon dropped off a tall building? Why would you do that? I would do it to see it splat. Or maybe even more extreme, a human jumping out of an airplane. These examples could all be considered "falling," but not every fall is the same.

So let's get to this. Here is all the physics you need to know about falling things. Hold on to your seats. This is probably going to be more than you asked for. Don't worry, the math will (mostly) be at a simple level.

Falling without air resistance

I'm going to talk about air resistance down below. However, I want to start with the simplest case of an object falling near the surface of the Earth that has a negligible air resistance force. Really, this simplification isn't just approximately true in many cases, it's also one of the key components of the nature of science. If we want to build a scientific model (science is all about building models), the best bet is to start off with something without extra complications. If you want to model a mass on a spring, assume the spring is massless. If you want to model a cow, you have to assume it's a sphere (mandatory spherical cow joke). These simplifications are the first step to building more complicated models.

Is gravity constant?

This is one thing that comes up quite a bit. People say that if you drop two objects of different mass, they have the same gravity. OK, the first problem is the word "gravity"—what does that mean? It can mean many different things. The two most common meanings are: the gravitational force or the gravitational field.

Let's start with the gravitational field. This is a measure of the gravitational effect due to an object with mass. Since the gravitational interaction is force between two masses, you can think of this as "half" of that interaction (with just one mass). If you have an object near the surface of the Earth, then that object will have a gravitational interaction depending on the Earth's gravitational field. Near the surface of the Earth, the gravitational field is represented by the symbol g and has a value of about 9.8 newtons per kilogram.

Isn't that just the acceleration due to gravity? No. The value of g is not the acceleration due to gravity. Yes, it is true that 9.8 N/kg has units equivalent to meters per second squared. It is also true that a free-falling (no air resistance) object falls with an acceleration of 9.8 m/s²—but it's still just the gravitational field. It doesn't matter what object you put near the surface of the Earth, the gravitational field due to the Earth is constant and pointing towards the center of the Earth. Note: It's not actually constant. More on that below.

What about the gravitational force? Here is a picture of two objects with different mass.

If you hold these two objects up, it should be clear that the gravitational force pulling down is not the same. The big rock has a bigger mass and a bigger gravitational force. That small metal ball has a much, MUCH smaller mass and also a much smaller gravitational force.

Yes, the gravitational force is also called the weight—those are the same things. But the mass is not the same as weight. Mass is a measure of how much "stuff" is in an object and weight is the gravitational force. Now to connect it all together. Here is the relationship between mass, weight, and gravitational field: W = mg, where W is the weight, m is the mass, and g is the gravitational field.

Technically, this should be a vector equation—but I'm trying to keep it simple. However, you can see that since g is constant, an increase in mass increases the weight.

Force and acceleration

OK, so you drop an object with mass. Once you let go, there is only one force acting on it—the gravitational force. What happens to an object with a force acting on it? The answer is that it accelerates. Oh, I know what you are thinking. You want to say that "it just falls," and maybe it falls fairly fast. That isn't completely wrong—but if you were to measure it carefully, you would see that it actually accelerates. That means that the object's downward speed increases with time.

Let's forget about falling objects for a moment. What about a small car on a horizontal, frictionless track with a fan pushing it? Like this:

If I turn on the fan and release the car, it accelerates. There are two ways I can change the acceleration of this car. I could increase the force from the fan or I could decrease the mass. With just a single force on an object in one dimension, I can write the following relationship: F = ma, so the acceleration is a = F/m.

This is what a force (or a net force) does to an object—it makes it accelerate. Please don't say forces make objects move. "Move" is a four letter word (that means it's bad). Saying an object "moves" isn't wrong, but it doesn't really give enough of a description. Let's just stick with saying the object accelerates.

There are many, many more things that could be said about force and motion, but this is enough for now.

Why do objects hit the ground at the same time?

Now we can put together a bunch of stuff to explain falling objects. If you drop a bowling ball and a basketball from the same height, they will hit the ground at the same time. Oh, just in case you don't have ball experience—the bowling ball is MUCH more massive than the basketball.

Maybe they hit the ground at the same time because they have the same gravitational force on them? Nope. First, they can't have the same gravitational force because they have different masses (see above). Second, let's assume that these two balls have the same force. With the same force, the less massive one will have a greater acceleration based on the force-motion model above.

Here, you can see this with two fan carts. The closer one has a greater mass, but the forces from the fans are the same. In the end, the less massive one wins.

No, the two objects with different mass hit the ground at the same time because they have different forces. If we put together the definition of the gravitational force (on the surface of the Earth) and the force-motion model, we get this: ma = mg.

Since both the acceleration AND the gravitational force depend on the mass, the mass cancels. Objects fall with the same acceleration—if and only if the gravitational force is the only force.

But does the gravitational force decrease with height?

Yes. The gravitational field is not constant. I lied. Your textbook lied. We lied to protect you. We aren't bad. But now I think you can handle the truth.

The gravitational force is an interaction between two objects with mass. For a falling ball, the two objects with mass are the Earth and the ball. The strength of this gravitational force is proportional to the product of the two masses, but inversely proportional to the square of the distance between the objects. As a scalar equation, it looks like this: F = Gm₁m₂/r².

A couple of important things to point out (since you can handle the truth now). The G is the universal gravitational constant. Its value is super tiny, so we don't really notice the gravitational interaction between everyday objects. The other thing to look at is the r in the denominator. This is the distance between the centers of the two objects. Since the Earth is mostly spherically uniform in density, the r for an object near the surface of the Earth will be equal to the radius of the Earth, with a value of 6,371 kilometers (huge).

So, what happens if you move 1 km above the surface of the Earth? The r goes from 6,371 km to 6,372 km—not a big change. Even if you go ALL the way up to the altitude of the International Space Station orbit (400 km), there isn't a crazy huge change. Here, I will show you with this plot of gravitational field vs. height above the surface. Oh, and here is the python code I used to make this—just in case you want it.

For just about all "dropping object" situations, we can just assume the gravitational force is constant.

But what about air resistance?

OK, now we are getting into the fun stuff. What if you drop an object and you can't ignore the air resistance? Then we have a more complicated problem, because there are now TWO forces on the falling object. There is the gravitational force (see all the stuff above), and there is also an air resistance force. As an object moves through the air, there is a force pushing in the opposite direction of motion. This force depends on:

  • The object's speed.
  • The size of the object.
  • The shape of the object.
  • The density of the air.

The part that makes this complicated is the dependency of the air resistance on the speed of the object. Let's consider a falling object with significant air resistance. How about a ping-pong ball? When I let go of this ball, it is not moving. This means there is zero air resistance force and only the downward gravitational force. This force causes the ball to increase in speed (in the downward direction)—but once the ball is moving, there is now an air resistance force pushing up. This makes the net force a little bit smaller, and thus you get a smaller increase in speed. Eventually the air drag and gravitational force have equal magnitudes. The ball then falls at a constant speed—this is called terminal velocity.

Since the net force on a falling object with air resistance isn't constant, this is a pretty tough problem. Really, the only practical way (OK, not really the only way) to model this is with a numerical calculation that breaks the motion into tiny steps during which the force is approximately constant.

How about a model of a falling ping-pong ball? Here you go—a version you can edit and run yourself.

You can see that the ping-pong ball almost reaches a constant speed after dropping a distance of 10 meters. I put a "no air" object in there for reference. If you want to see what happens if you change the mass—go ahead and change the code and re-run it. It's fun.

Do heavier objects fall faster?

Now we get to the interesting question. If I drop two objects from the same height, does the heavier one hit the ground first? The answer is "sometimes." Let's look at three examples.

Drop 1: A basketball and bowling ball. Here is a slow-motion view of this actual thing.

If you ignore air resistance, then these two objects have the same acceleration, even though they have different masses (see above). But why can you ignore the air resistance in this case? Looking at the basketball, it has a significant mass and size. However, it is moving fairly slowly during the fall. Even at the fastest part of this drop the force from air on the ball is super tiny compared to the gravitational force. Now, if you dropped it from a much higher starting point, the ball would be able to get up to a speed where the air drag makes it fall slower than the bowling ball.

Drop 2: A small ball and a cardboard box top. Just to be clear, the mass of the cardboard is WAY higher than that of the ball. Here is the drop. Sorry, the ball is hard to see since it's small.

Does the more massive object fall faster? Nope. In fact it's the lower-mass object that hits the ground first. It's not just mass that matters; size matters too. Even though the cardboard has a greater mass, its surface area is also GIANT. This produces a significant air resistance force to make it hit the ground later.

Drop 3: Two pieces of paper. Two sheets of paper are pretty much the same, so they should have the same mass. However, they can hit the ground at different times.

I tricked you. Both papers have the same mass, but I crumpled one up, so they have different surface areas. The crumpled-up paper hits the ground first. It seems like this could be a good party trick. But again, it's about more than just the mass of the object.

What about different-sized skydivers?

Two people jump out of an airplane (with parachutes, because they aren't crazy). One person is large, and one person is small. Which one falls with the greater terminal velocity? Yes, you can assume they are both in standard free fall position (same shape).

I am going to invoke the "spherical cow" principle and look at two falling spherical humans instead. Human 1 is a sphere with a radius of 1 meter (yes, that would be huge), and human 2 has a radius twice as big, at 2 meters.

How do the gravitational forces on these two spherical humans compare? Human 2 is obviously heavier. If the human density is constant, then the increase in gravitational force will be proportional to the increase in volume. If you double the radius of a sphere, you increase the volume by a factor of eight (volume is proportional to radius cubed). So human 2 has a weight eight times that of human 1.

What about the air resistance on these two humans? Again, human 2 will have a bigger area and more air resistance. If you double the radius, the cross-sectional area will be four times as much (since area is proportional to radius squared). Now you see that the bigger human will have a greater terminal velocity. Human 2 has a weight that is eight times as much, but air drag that is only four times as much as the smaller human.

Now let's take this to the extreme. An ant and an elephant jump out of a plane. The elephant is going to need a massive parachute, but the ant probably doesn't need anything. Since the weight-to-area ratio is super tiny for a super tiny object, the ant will have a very small terminal velocity. It can probably impact the ground with little injury. Note to my ant readers: Please be safe and don't try this in real life, in the unlikely event that I am wrong.

But size matters—especially when falling with air resistance.

I think this might be my longest blog post. Congratulations if you made it all the way to the end.

OK, I'm a little excited for the new Aquaman movie. Sure, I've been let down by DC movies before—but we also got Wonder Woman (which was awesome). Also, as a kid my mom made an Aquaman costume for me. She said it was the best costume for me since I had blonde hair (and so did Aquaman). But the real reason was that Aquaman didn't wear a mask—and masks are difficult to make. It was a great costume, thanks mom.

Now for the part where I do what I do—use physics to analyze a movie trailer. Let's get to it.

Although I don't really know what is going on, I know there is a submarine. I also know that Aquaman shoots out of the water and lands on this submarine. It is this scene that I will analyze.

Aquaman might be able to swim super fast, but once he leaves the water and enters the air there is only one force acting on him—the gravitational force that pulls straight down. Since the strength of the gravitational force depends on the mass of Aquaman AND the net force is equal to the product of mass and acceleration, the acceleration has to be equal to a constant 9.8 meters per second squared (the value of the local gravitational field).

Once in the air, Aquaman has the same acceleration as a rock that is tossed up. In the air, it's not about Aquaman, it's just about physics. Since it's physics, if I can look at his vertical motion I should be able to figure stuff out. In this situation, I can use video analysis to find his position in each frame of the video. This will give both position and time data so that I can plot his trajectory. Oh, but it's not exactly straightforward. In this scene, the camera (or virtual camera) seems to move forward. This means that the ratio of pixel size in the video to actual size will change with the position of the camera. I can compensate for this changing camera, but it takes some extra steps. If you want to do something like this yourself, check out Tracker Video Analysis. Very useful.

Here's what the motion would look like from a stationary camera.

Now for the physics. With a situation like this, there are actually three things to consider: the distance scale, the time scale (frame rate), and the vertical acceleration. In video analysis, you can pick two of these things to be known and then solve for the other one. In this case, I am going to assume the size of Aquaman and that the video plays in real time (so the frame rate is correct). Then I can plot the vertical position of Aquaman as a function of time. The plot should be a parabola. Here's what I get.

Yup. That looks like a parabola—so that's good. Even better, by fitting an equation to this data I can get a value for the vertical acceleration. It's twice the coefficient for the t² term. That puts the acceleration at 11.8 meters per second squared. On the surface of the Earth, a free-falling object would have an acceleration of 9.8 m/s². Actually, these two values are fairly close—especially since I guessed the size of Aquaman in order to set the distance scale.

Why is this impressive? Let me first point out that this is most certainly a CGI scene. I doubt they got a stunt man to shoot out of the water and land on a submarine (but I've been wrong before). This means that they didn't just animate the motion of a digital Aquaman, they calculated his motion using physics. I think that's awesome.

But wait! There's more. Now that I have a trajectory for Aquaman, I can answer two questions. First, how high did he move out of the water? That's pretty easy. I can just look at the position vs. time graph and see that his change in vertical height was about 3.6 meters (almost 12 feet for Imperials). Second, how fast was he swimming in the water before he moved into the air? This is a fairly straightforward projectile motion problem. If you know the acceleration (and I do) and you know the maximum height (and I do), you can calculate the starting velocity. I'll leave the details as a homework assignment, but the answer is 8.4 m/s or about 19 mph (again, for Imperial unit users). That's pretty fast, but not the fastest fish in the ocean. The sailfish can get up to speeds of 30 m/s.

Of course, Aquaman might not be going full speed here. Why would he? He's just jumping on a submarine.

On Wednesday night, White House press secretary Sarah Huckabee Sanders shared an altered video of a press briefing with Donald Trump, in which CNN reporter Jim Acosta's hand makes brief contact with the arm of a White House intern. The clip is of low quality and edited to dramatize the original footage; it's presented out of context, without sound, at slow speed with a close-crop zoom, and contains additional frames that appear to emphasize Acosta's contact with the intern.

And yet, in spite of the clip's dubious provenance, the White House decided to not only share the video but cite it as grounds for revoking Acosta's press pass. "[We will] never tolerate a reporter placing his hands on a young woman just trying to do her job as a White House intern," Sanders said. But the consensus, among anyone inclined to look closely, has been clear: The events described in Sanders' tweet simply did not happen.

This is just the latest example of misinformation roiling our media ecosystem. The fact that it continues to not only crop up but spread—at times faster and more widely than legitimate, factual news—is enough to make anyone wonder: How on Earth do people fall for this schlock?

To put it bluntly, they might not be thinking hard enough. The technical term for this is "reduced engagement of open-minded and analytical thinking." David Rand—a behavioral scientist at MIT who studies fake news on social media, who falls for it, and why—has another name for it: "It's just mental laziness," he says.

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

Several of Rand's recent experiments support theory number two. In a pair of studies published this year in the journal Cognition, he and his research partner, University of Regina psychologist Gordon Pennycook, tested people on the Cognitive Reflection Test, a measure of analytical reasoning that poses seemingly straightforward questions with non-intuitive answers, like: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? (The intuitive answer is 10 cents; the correct answer is 5 cents.) They found that high scorers were less likely to perceive blatantly false headlines as accurate, and more likely to distinguish them from truthful ones, than those who performed poorly.

Another study, published on the preprint platform SSRN, found that asking people to rank the trustworthiness of news publishers (an idea Facebook briefly entertained, earlier this year) might actually decrease the level of misinformation circulating on social media. The researchers found that, despite partisan differences in trust, the crowdsourced ratings did "an excellent job" distinguishing between reputable and non-reputable sources.

"That was surprising," says Rand. Like a lot of people, he originally assumed the idea of crowdsourcing media trustworthiness was a "really terrible idea." His results not only indicated otherwise, they also showed, among other things, "that more cognitively sophisticated people are better at differentiating low- vs high-quality [news] sources." (And because you are probably now wondering: When I ask Rand whether most people fancy themselves cognitively sophisticated, he says the answer is yes, and also that "they will, in general, not be." The Lake Wobegon Effect: It's real!)

His most recent study, which was just published in the Journal of Applied Research in Memory and Cognition, finds that belief in fake news is associated not only with reduced analytical thinking, but also—go figure—delusionality, dogmatism, and religious fundamentalism.

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it. This isn't baseless philosophizing; most folks just tend not to use social media to engage critically with whatever news, video, or sound bite is flying past. As one recent study shows, most people browse Twitter and Facebook to unwind and defrag—hardly the mindset you want to adopt when engaging in cognitively demanding tasks.

But it doesn't have to be that way. Platforms could use visual cues that call to mind the mere concept of truth in the minds of their users—a badge or symbol that evokes what Rand calls an "accuracy stance." He says he has experiments in the works that investigate whether nudging people to think about the concept of accuracy can make them more discerning about what they believe and share. In the meantime, he suggests confronting fake news espoused by other people not necessarily by lambasting it as fake, but by casually bringing up the notion of truthfulness in a non-political context. You know: just planting the seed.

It won't be enough to turn the tide of misinformation. But if our susceptibility to fake news really does boil down to intellectual laziness, it could make for a good start. A dearth of critical thought might seem like a dire state of affairs, but Rand sees it as cause for optimism. "It makes me hopeful," he says, "that moving the country back in the direction of some more common ground isn’t a totally lost cause."

On November 8, an unimaginably fierce firestorm broke out in Northern California. Fed by dry vegetation, and fanned by northeasterly winds pouring off the Sierra Nevada Mountains, it rapidly descended on the community of Paradise, home to nearly 30,000 people.

Scott McLean, deputy chief of Cal Fire, was among the rescuers, driving through town and frantically trying to get people out. “I just left the hospital, heading up into the mess again,” he told WIRED Friday evening. “And out of the smoke comes this little old lady with a little puppy in a wheelchair just scooting down the road. These people didn't have ways to get out. So I picked her up, put her in my truck, and took her back to the hospital.”

Virtually nothing is left of Paradise—the tally is almost 19,000 structures destroyed. That makes the Camp Fire by far the most destructive wildfire in California history. It is also by far the state’s deadliest, with a death toll of at least 88 and hundreds still missing.

Something’s gone awry in California. Fires aren’t supposed to destroy entire cities—at least not since San Francisco burned in 1906. Fire codes, better fire-resistant materials, fancier firefighting equipment, and water-spewing aircraft have made it easier to put out flames. Yet in the last year, California has seen seven of its 20 most destructive wildfires ever. The Camp Fire comes just a year after the second most destructive blaze, the Tubbs Fire, struck the city of Santa Rosa in the wine country, leveling 5,500 structures and killing 22.


“How could this happen?” says Stephen Pyne, a fire researcher at Arizona State University. “How did this come back? I mean, this is what we saw in the 19th century.”

You can find much of the “how” in the clash of two long-term trends, climate change and population growth. The fires aren’t going away, but likely neither are the people. So how do you keep 40 million people and counting from suffering the same fates as the residents of Paradise? And how do you protect $2.6 trillion in property?

“At some point, you don't know what to say,” adds Pyne. “It's like mass shootings; we're just sort of numbed by it and we don't seem to be able to respond.”

But respond California must.

How We Got Here

Climate change didn’t invent wildfires, but according to the data, it’s making them worse. This is largely a problem of timing. Normally by this time of the year, California has at least a little bit of rain, which helps rehydrate parched vegetation. With global warming, though, the state has fallen into a severe autumn drying trend.

The fast, hot winds that blow in from the east this time of year are further desiccating the vegetation, providing ample fuel for what became the Camp Fire, as well as the Woolsey Fire in Southern California. These conflagrations spew embers that fly for miles ahead, creating a multitude of new fires, which firefighters simply can’t handle.

The fires are encroaching on sprawling development in California—or, more accurately, the development is encroaching on the fires. “I think of fire as a driverless car,” says Pyne. “It's just barreling down the road integrating everything around it. It's a reaction—it takes its character from its context.”

Controlling fire, then, means changing its character by tweaking our cities and communities. Fire codes emerged as a reaction to the need to control urban development. Plain wooden shingle roofs are a no-no, for instance. Properties are subject to rules about creating defensible spaces—for example, clearing out dead plants and grass. In 2005, a new California law bumped the required clearance from 30 to 100 feet.

But fire codes only go so far. “One of the weaknesses is that it's really difficult to actually enforce that,” says Crystal Kolden, a fire scientist at the University of Idaho. “The enforcement falls on the local municipal agencies and fire departments, and oftentimes they simply don't have the resources.”

Then there are California’s struggles with fire suppression. Not letting trees burn can actually lead to bigger blazes. “There's really good scientific evidence that the tree density in the Sierra Nevada right now is much higher than it was in the pre-European settlement period,” according to Kolden. “That's very much a product of 100 years of fire suppression.”

Forests packed with more fuel than is natural also create more conflagrations, across millions of acres of wildlands where crews should be reducing fuel loads. “You don't have a lot of resources to do that, because so much of your funding now goes to simply fighting fires year round,” she adds. “That fuel simply remains there, and will remain there until it finally burns.”

One solution is prescribed burning, a measure that California hasn't quite embraced. So far this year, the state has done around 55,000 acres of prescribed burning. The southeastern US churned through 5.5 million acres last year—100 times more. And the Southeast as a region is only about five times bigger than California.

“When you look at the southeastern US, it's not a place where we think of as having a lot of wildfires, and they really don't,” says Kolden. “That's because the southeastern US does an enormous amount of prescribed fire because their vegetation grows back so quickly.”

California’s densely packed wilderness has another thing going against it: power lines. Indeed, the prime suspect of the Camp Fire is the local utility, PG&E, which reported an electrical incident at the conflagration’s origin just before crews spotted the blaze. The utility may be to blame for last year’s Tubbs Fire as well. The question then becomes: Why on Earth are we not burying power lines?

The reason is lots of metamorphic rock—very dense stuff that forms in high pressure and high heat conditions. It’s not easy to drill through. “It becomes prohibitively expensive to bury lines and still be able to provide access to those lines,” notes Kolden. Utilities can bury them where there’s dirt, sure, but it’s still going to be very expensive.

Fire Is a People Problem

Yes, California needs to get better about fuel management. At their core, though, wildfires are a people problem. Fires in the state typically have raged either in the wilderness or in cities. Which is why we have wildland firefighters, who are lightly outfitted, as well as urban firefighters, who wear much heavier protections to enter burning buildings.

They’re also trained in radically different ways. “Structural city firefighters are really focused on saving people, and they understand a lot of the chemistry and physics of burning buildings,” says Kolden. Wildland firefighters, on the other hand, know how fire behaves in forests.

But now that wildfires are moving into populated areas, both groups are being marshaled to fight blazes where they’re not accustomed to fighting. The issue now is whether to train firefighters to handle both scenarios, or better assign resources to make sure each group fights where they're most comfortable. Kolden believes that the latter is the safer option.

Then there’s the matter of training everyone else living in California—building with better materials, clearing out defensible spaces. Let's take the town of Montecito as a model for how to bolster a community against wildfire.

A few years back, Kolden helped put together a worst-case scenario model that projected a fire driven by 60-mph winds could destroy 400 to 500 homes in Montecito, a superwealthy community on the Southern California coast. Last year, that conflagration came in the form of the Thomas Fire. But Montecito had been readying itself for decades.

“They really focused on defensible space around homes, particularly the homes that were closest up against the wildland areas,” she says. “They focused a lot on doing brush removal along their road system.” In addition, they made a mountain of information available to firefighters who might come from out of town to help battle a blaze. Basically: This is how we’ve prepared.

When the Thomas Fire hit, Montecito couldn’t rely on aircraft to drop water, yet only seven homes were lost—not 500. “To me it was a model,” explains Kolden. “This community has figured out what works for them. And the homeowners 100 percent bought into it, and they're all working together to make the community resilient to fire.” To be clear, what works for a coastal enclave like Montecito might not work for a forest town like Paradise. Each community is unique, and will need its own unique solution.

Sadly, a month after it broke out, the Thomas Fire took its true toll on Montecito: Heavy rains triggered mudslides on burned-out land, killing 21 people in the area.

Still, Montecito had a solid fire evacuation plan in place, in stark contrast to what happened in Paradise. The Mercury News reports that the evacuation was absolute chaos. Many residents are saying they received no warning at all from authorities, and only made it out because they either spotted the flames or a neighbor came for them.

According to the Los Angeles Times, authorities didn't issue the first evacuation order until the fire had already reached Paradise, 90 minutes after the fire was first spotted at its origin east of town at about 6:30 am. Even then, they reportedly didn't initiate a full-scale evacuation—fearing a repeat of a 2008 fire evacuation, in which roads became clogged. The full-scale evacuation order didn't land until more than an hour after that, at 9:17 am.

By then, the Camp Fire was burning Paradise. Residents fleeing at the last minute crammed the few escape routes. Some abandoned their vehicles to escape on foot. Not everyone could. Paradise is a retirement community; its elderly need time and sometimes special arrangements to clear out of their homes, let alone get out of town.

“It's what I feared,” says Thomas Cova, who studies wildfire evacuations at the University of Utah. “It looks like we're repeating history again from the Tubbs Fire last year.” During that disaster, authorities opted not to send an alert, fearing they’d cause alarm and hamper emergency efforts. That fire claimed 22 lives.

McLean, of Cal Fire, says that his organization immediately notified the Butte County Sheriff’s Department when they spotted the blaze. The sheriff is then in charge of sending out an alert. “We have a warning failure of really epic proportions,” says Cova.

A particularly powerful tool is the Amber Alert system, but Paradise residents say they didn’t receive any warning through it. (The mayor of Paradise says the town did have an evacuation plan that was practiced in 2016.)

As Montecito proved last year, it doesn’t have to be this way. “We know enough to stop this,” says Pyne. “We knew enough decades ago.”

Fueled by climate change and fierce winds and dry vegetation, fires will keep licking at places like Paradise. The future of cities will depend on how serious they get about fuel management and building codes—and in case that fails, evacuation procedures. To that end, in September, California Governor Jerry Brown signed legislation that bolsters wildfire prevention efforts.

California will have to spend billions upon billions to fix this problem, but that’s a tiny investment compared with what it stands to lose.

The Parker Solar Probe just earned the title of the fastest-moving manmade object. Launched by NASA this past August, this robotic spacecraft is currently very, very near the Sun, on its way to probe the outer corona of our local star.

OK, I know you have questions. Let me just jump right into it.

How fast is it going?

According to NASA, its current speed is 153,545 mph (or 68.6 kilometers per second). But really, that just means super fast. It's nearly impossible to imagine something that fast when the fastest man-made stuff on Earth is perhaps a rail gun projectile at about 2.52 km/s. That means the Parker Solar Probe is traveling at a speed that is 27 times faster than the fastest thing we've got down here. Zoom fast.

What does this have to do with the speed of light?

Of course, light is even faster. Light has a speed of about 3 x 108 m/s (300,000 km/s). But why does that matter? You can't get an object up to (or greater than) the speed of light. Why? Let's start with an example. Suppose I have a force of 1 Newton and I push on an object at rest with a mass of 1 kg for 1 second (I'm using easy numbers). The momentum principle says that the momentum is the product of mass and velocity. Also, the force applied to an object tells us the rate of change of momentum. This means a 1 Newton force for 1 second gives a CHANGE in momentum of 1 kg*m/s (the change part is important).

This mostly works—until you get to super high speeds. The momentum principle still works at those speeds, as long as you use a better definition of momentum. It should look like this (in one dimension): p = mv/√(1 − v²/c²).

In this expression, the p is momentum (don't ask why) and the c represents the speed of light. Notice that as the velocity gets closer to the speed of light, you get a much smaller increase in speed for the same force. In fact, if the velocity was equal to the speed of light you would be dividing by zero—which is generally a bad thing.

Just to be clear, there aren't two models for momentum. You can always use the more complicated version of momentum. Try this: Calculate the momentum of a baseball with a mass of 0.142 kg and a speed of 35 m/s. First do this with the simple formula of mass times velocity and you get 4.97 kg*m/s. Now try it with the more complicated formula. Guess what? You get the same thing. I recommend using the simple formula whenever possible.

Just how fast is the Parker Solar Probe going compared to the speed of light? If you divide the probe's speed by the speed of light you get 0.00023. Actually, we can write this as 0.00023c (where c is the speed of light). It's fast, but it's not light-speed fast.

Why is this speed relative to the Sun?

You will probably see something about the speed of the Parker Solar Probe labeled as the heliocentric velocity. What's the deal with that?

On Earth, this is rarely an issue. If you are driving your car at 55 mph, everyone understands that we are measuring this velocity with respect to the stationary ground. In fact, velocities only really make sense when measured relative to some reference frame. On the Earth, the obvious reference frame is the ground.

What if you didn't want to use the Earth's surface as a reference frame? Imagine a police officer pulling you over in your car and saying "oh hello, I clocked you at 67,055 mph." That could indeed be true since the Earth isn't stationary. In order to orbit the Sun, it has to travel with a speed of 67,000 mph to make it all the way around the Sun in one year. Yes, that's fast (with respect to the Sun).

If you wanted to measure the speed of the Parker Solar Probe with respect to the Earth, you would have a tough time because you wouldn't just have one value. As the probe moves closer to the Sun, the probe and the Earth can be moving in different directions. So even though the speed relative to the Sun could stay constant, its speed relative to the Earth would change since the Earth is turning in its orbit around the Sun.

If you really want to get crazy, you could use some other reference frame—like the galactic center. But let's not get crazy.

How does the probe break its own speed record?

The probe will go even faster than it is already traveling. NASA projects a still higher speed as it gets closer to the Sun in 2024. But why does it get faster when it is closer to the Sun?

There are two key ideas here. The first is the gravitational force. This is an attractive force between the Sun and the probe. The magnitude of this force increases as the distance between them decreases. Oh, don't worry—you can't notice an increase in gravitational force as you move closer to the ground. Even if you moved a vertical distance of 1000 meters, this is insignificant compared to the size of the Earth with a radius of 6.37 million meters.

The other part of the problem is circular motion. Imagine the space probe traveling in a circular orbit (which isn't actually true). In order for an object to move in a circle, there needs to be a force pulling it towards the center of the circle. The magnitude of this sideways force is proportional to the square of the object's velocity, but inversely proportional to the radius of the circle. Putting the gravitational force and the required circular force together, I get the following expression for the orbital velocity: v = √(GMs/r).

In this expression, Ms is the mass of the Sun and G is the gravitational constant. But the main point is that the velocity increases as the radius decreases. It's just physics.

Homework

If you want some fun physics homework questions, I have you covered. Here you go.

  • Calculate the kinetic energy of the Parker Solar Probe at its current velocity (with respect to the Sun). Yes, you need to look up or estimate the mass of the probe.
  • Suppose you were going to get the probe up to speed by having a human on a stationary bike connected to a generator. The human can produce 50 Watts for as long as you like (maybe it's two humans who take turns). How long would it take to get the probe up to its current speed?
  • The probe has been in space for about 3 months (let's go with 3 months). Suppose that the probe was traveling at a constant speed this whole time (use its current velocity). Create a plot of speed vs. time as measured relative to the Earth. Remember, in 3 months the Earth changes direction.
  • How many candy bars would the probe need to "eat" to get to its current speed? Yes, I'm assuming the probe eats. (A starter sketch follows below.)

This story originally appeared on Grist and is part of the Climate Desk collaboration.

When a blue-hulled cargo ship named Venta Maersk became the first container vessel to navigate a major Arctic sea route this month, it offered a glimpse of what the warming region might become: a maritime highway, with vessels lumbering between Asia and Europe through once-frozen seas.

Years of melting ice have made it easier for ships to ply these frigid waters. That’s a boon for the shipping industry but a threat to the fragile Arctic ecosystem. Nearly all ships run on fossil fuels, and many use heavy fuel oil, which spews black soot when burned and turns seas into a toxic goopy mess when spilled. Few international rules are in place to protect the Arctic’s environment from these ships, though a proposal to ban heavy fuel oil from the region is gaining support.

“For a long time, we weren’t looking at the Arctic as a viable option for a shortcut for Asia-to-Europe, or Asia-to-North America traffic, but that’s really changed, even over the last couple of years,” says Bryan Comer, a senior researcher with the International Council on Clean Transportation’s marine program. “It’s just increasingly concerning.”

Venta Maersk departed from South Korea in late August packed with frozen fish, chilled produce, and electronics. Days later, it sailed through the Bering Strait between Alaska and Russia, before cruising along Russia’s north coast. At one point, a nuclear icebreaker escorted Venta Maersk through a frozen Russian strait, then the container vessel continued to the Norwegian Sea. It’s expected to arrive in St. Petersburg later this month.

The trial voyage wouldn’t have been possible until recently. The Arctic region is warming twice as fast as the rest of the planet, with sea ice, snow cover, glaciers, and permafrost all diminishing dramatically over recent decades. In the past, only powerful nuclear-powered icebreakers could forge through Arctic seas; these days, even commercial ships can navigate the region from roughly July to October—albeit sometimes with the help of skilled pilots and icebreaker escorts.

Russian tankers already carry liquefied natural gas to Western Europe and Asia. General cargo vessels move Chinese wind turbine parts and Canadian coal. Cruise liners take tourists to see surreal ice formations and polar bears in the Arctic summer. Around 2,100 cargo ships operated in Arctic waters in 2015, according to Comer’s group.

“Because of climate change, because of the melting of sea ice, these ships can operate for longer periods of time in the Arctic,” says Scott Stephenson, an assistant geography professor at the University of Connecticut, “and the shipping season is already longer than it used to be.” A study he co-authored found that, by 2060, ships with reinforced hulls could operate in the Arctic for nine months in the year.

Stephenson says that the Venta Maersk’s voyage doesn’t mean that an onrush of container ships will soon be clogging the Arctic seas, given the remaining risks and costs needed to operate in the region. “It’s a new, proof-of-concept test case,” he says.

Maersk, based in Copenhagen, says the goal is to collect data and “gain operational experience in a new area and to test vessel systems,” representatives from the company wrote in an email. The ship didn’t burn standard heavy fuel oil, but a type of high-grade, ultra-low-sulfur fuel. “We are taking all measures to ensure that this trial is done with the highest considerations for the sensitive environment in the region.”

Sian Prior, lead advisor to the HFO-Free Arctic Campaign, says that the best way to avoid fouling the Arctic is to ditch fossil fuels entirely and install electric systems with, say, battery storage or hydrogen fuel cells. Since those technologies aren’t yet commercially viable for ocean-going ships, the next option is to run ships on liquefied natural gas. The easiest alternative, however, is to switch to a lighter “marine distillate oil,” which Maersk says is “on par with” the fuel it’s using.

But many ships still run on cheaper heavy fuel oil, made from the residues of petroleum refining. In 2015, the sludgy fuel accounted for 57 percent of total fuel consumption in the Arctic, and was responsible for 68 percent of ships’ black carbon emissions, according to the International Council on Clean Transportation.

Black carbon wreaks havoc on the climate, even though it usually makes up a small share of total emissions. The small dark particles absorb the sun’s heat and directly warm the atmosphere. Within a few days, the particles fall back down to earth, darkening the snow and hindering the snow’s ability to reflect the sun’s radiation—resulting in more warming.

When spilled, heavy fuel oil emulsifies on the water’s surface or sinks to the seafloor, unlike lighter fuels which disperse and evaporate. Clean-up can take decades in remote waters, as was the case when the Exxon Valdez crude oil tanker slammed into an Alaskan reef in 1989.

“It’s dirtier when you burn it, the options to clean it up are limited, and the length it’s likely to persist in the environment is longer,” Prior says.

In April, the International Maritime Organization, the U.N. body that regulates the shipping industry, began laying the groundwork to ban ships from using or carrying heavy fuel oil in the Arctic. Given the lengthy rulemaking process, any policy won’t likely take effect before 2021, Prior says.

One of the biggest hurdles will be securing Russia’s approval. Most ships operating in the Arctic fly Russian flags, and the country’s leaders plan to invest tens of billions of dollars in coming years to beef up polar shipping activity along the Northern Sea Route. China also wants to build a “Polar Silk Road” and redirect its cargo ships along the Russian route.

Such ambitions hinge on a melting Arctic and rising global temperatures. If the warming Arctic eventually does offer a cheaper highway for moving goods around the world, Comer says, “then we need to start making sure that policies are in place.”

Way, way out at the cold, dark edges of the solar system—past the rocky inner planets, beyond the gas giants, a billion miles more remote than Pluto—drifts a tiny frozen world so mysterious, scientists still aren't entirely sure if it's one world or two.

Astronomers call it Ultima Thule, an old cartography term meaning "beyond the known world." Its name is a reference to its location in the Kuiper Belt, the unexplored "third zone" of our solar system populated by millions of small, icy bodies.

Numerous though they are, no Kuiper Belt object has ever been seen up close. NASA's two Voyager probes—which traversed the third zone decades ago—might have spied a glimpse of one had they been equipped with the right instruments, except that the Kuiper Belt hadn't even been detected yet. On New Year's Eve, for the first time, NASA will get a chance at some facetime with one of these enigmatic space rocks.

At 9:33 pm PST, 33 minutes past midnight on the East Coast, the agency's New Horizons probe will make a close pass of Ultima Thule, making it the most distant object ever to be visited by a spacecraft.

Astronomers have almost no idea what awaits them. “What’s it going to look like? No one knows. What’s it going to be made of? No one knows. Does it have rings? Moons? Does it have an atmosphere? Nobody knows. But in a few days we’re going to open that present, look in the box, and find out,” says Alan Stern, the mission's principal investigator.

New Horizons has traveled for 13 years and across 4 billion miles to reach this point, and the probe looks to be in fine shape: Mission planners confirmed earlier this month that it will pass within 2,200 miles of Ultima Thule, after determining that large objects, like moons, and smaller ones, like dust, were unlikely to pose a threat to the spacecraft as it blazes past at more than 31,000 miles per hour. ("When you're traveling that fast, hitting something even the size of a grain of rice could destroy the spacecraft," says Hal Weaver, the mission's project scientist.)
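Weaver's comparison is easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch in Python, assuming a roughly 25-milligram grain of rice; the mass is a round-number assumption, and the speed is the figure quoted above:

```python
# Back-of-the-envelope: kinetic energy of a rice-grain-size particle
# striking New Horizons at flyby speed.
grain_mass_kg = 25e-6            # ~25 mg rice grain (assumed round number)
speed_mph = 31_000               # flyby speed quoted by the mission team
speed_ms = speed_mph * 0.44704   # miles per hour -> meters per second

energy_j = 0.5 * grain_mass_kg * speed_ms ** 2
print(f"{energy_j:,.0f} joules")  # ~2,400 J, on par with a rifle bullet's muzzle energy
```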

New Horizons' trajectory will carry it three times closer to Ultima Thule than it came to Pluto, which it shot past in the summer of 2015. The photos New Horizons beamed back then were the most detailed ever captured, not just of the former planet but of the outer solar system. Because of its proximity, the images the probe collects of Ultima Thule will be more detailed still, and from a billion miles deeper in space. "Pluto blew our doors off," Stern says, "but now we're heading for something much more wild and woolly."

Stern and his team discovered the object in 2014 using the Hubble Space Telescope, while searching the sky for places New Horizons could visit after its brief encounter with Pluto. In those first images, Ultima was just a glob of pixels that shifted every few minutes against a backdrop of unmoving stars.

In more recent images, captured by New Horizons' Long Range Reconnaissance Imager, the object still appears as little more than a speck in a sea of much brighter specks. "When you search for it, it looks like stars puked all over the imagery," says planetary scientist Amanda Zangari, who spent most of December collecting Ultima Thule's position and brightness measurements. "To even see the darn thing, you need to stack multiple images, account for the distortion between them, and subtract the stars." At 1/100th the diameter of Pluto, and 1/10,000th its brightness, Ultima Thule makes for a more elusive quarry than the erstwhile planet.
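Zangari's description corresponds to a standard technique in planetary astronomy, often called shift-and-stack: subtract the fixed star field, shift each exposure to undo the target's predicted motion, and average the result. Below is a minimal numpy sketch of the idea; the inputs and the simple whole-pixel alignment are illustrative, not the mission's actual pipeline, which also handles distortion and sub-pixel registration:

```python
import numpy as np

def shift_and_stack(frames, offsets, star_template):
    """Stack exposures along a moving target's predicted track.

    frames: list of 2D image arrays taken minutes apart
    offsets: per-frame (dy, dx) pixel motion of the target (illustrative)
    star_template: a median "stars-only" frame to subtract first
    """
    stacked = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, offsets):
        cleaned = frame - star_template                      # remove the fixed stars
        aligned = np.roll(cleaned, (-dy, -dx), axis=(0, 1))  # undo the target's motion
        stacked += aligned                                   # target adds up; noise averages down
    return stacked / len(frames)
```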

Through their observations, the team has determined that Thule (whose official designation is 2014 MU69) is either two separate objects orbiting one another at close range, or a pair of bodies that gravitated toward each other until they merged, forming the two lobes of something astronomers call a contact binary. Either way, the data suggests Ultima is no more than 20 miles in diameter, dark as reddish dirt, and well within range of New Horizons' fuel supply.

It is also, in all likelihood, very, very old. Which is precisely why astronomers are so excited to study it up close.

Kuiper Belt objects like Ultima Thule are thought to be remnants of the solar system's formation—the cosmic refuse that remained after the planets came into being some 4.6 billion years ago. That makes them an enticing destination for astronomers: Many of those objects aren't just ancient; they're also, astronomers think, perfectly preserved by temperatures approaching absolute zero. (So far removed is Ultima Thule from the sun's warming rays that our parent star would appear, to an observer on its surface, about the size that Jupiter does from here on Earth.) NASA's plan to visit one, map its features, study its makeup, detect its atmosphere (if one exists), and search it for satellites and rings is more than a flyby mission. It's an archaeological expedition of cosmic scale and consequence.

New Horizons will investigate Ultima with the same suite of instruments it used to study the Pluto system back in 2015. A trio of optical devices will capture images of the object in color and black-and-white, map its composition and topography, and search for gases emanating from its surface. Two spectrometers will also search for charged particles in Ultima Thule's environs; a radio-science instrument will measure its surface temperature; and a dust counter will detect flecks of interplanetary debris. Fully loaded, the piano-sized probe weighs a hair over 1,000 pounds and requires less power than a pair of 100-watt light bulbs to operate its equipment.

After its New Year's Eve flyby, New Horizons will continue on its path out of the Kuiper Belt. But the third zone is vast. Even traveling at nearly nine miles per second, it'll take the spacecraft a decade to traverse it and enter interstellar space. Stern and his colleagues will use that time to search for yet another target—one even farther from the sun than Ultima Thule, and shrouded, perhaps, in still more mystery. It's a tantalizing prospect for the New Horizons team. "To visit a place you know nothing about," Weaver says. "That's exploration at its finest."

Learn More About the New Horizons Mission

  • In 2015, New Horizons zipped past Pluto, giving astronomers their closest look yet at the erstwhile planet and its moons.
  • NASA's probe traveled some 3 billion miles to reach Pluto. It's traveled another billion, still, to reach Ultima Thule.
  • How does New Horizons beam all its observations back to Earth, when it's so far away? Very slowly. (A back-of-the-envelope estimate follows this list.)
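Just how slowly is easy to estimate from figures elsewhere in this story. A minimal sketch, using the 4-billion-mile distance quoted above; the onboard data volume and downlink rate are round-number assumptions, not mission specifications:

```python
# One-way light time from Ultima Thule, plus a rough downlink estimate.
distance_miles = 4e9                 # the article's distance figure
light_speed_mi_s = 186_282           # speed of light, miles per second

light_time_hr = distance_miles / light_speed_mi_s / 3600
print(f"One-way light time: {light_time_hr:.1f} hours")   # ~6 hours

data_bits = 50e9                     # assume ~50 gigabits stored onboard
rate_bps = 1_000                     # assume ~1 kbit/s sustained downlink
months = data_bits / rate_bps / 86_400 / 30
print(f"Full data return: roughly {months:.0f} months")    # on the order of 19 months
```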

Related Video

Science

Mission to Pluto: The Story Behind the Historic Trip

It’s taken nine years to get there, but on July 14, 2015, the New Horizons spacecraft will finally fly by its destination: Pluto. Find out how the historic mission to Pluto happened from the people who helped launch it.

This story originally appeared on Grist and is part of the Climate Desk collaboration.

You probably didn’t give much thought to how exactly you loaded this webpage. Maybe you clicked a link from Twitter or Facebook and presto, this article popped up on your screen. The internet seems magical and intangible sometimes. But the reality is, you rely on physical, concrete objects—like giant data centers and miles of underground cables—to stay connected.

All that infrastructure is at risk of being submerged. In just 15 years, roughly 4,000 miles of fiber-optic cables in US coastal cities could go underwater, potentially causing internet outages.

That’s the big finding from a new, peer-reviewed study from the University of Wisconsin-Madison and the University of Oregon. To figure out how rising seas could affect the internet’s physical structures, researchers compared a map of internet infrastructure to the National Oceanic and Atmospheric Administration’s predictions for sea-level rise near US coasts.
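Conceptually, that comparison is a geospatial intersection: overlay the cable routes on the projected inundation zones and measure what falls inside. Here is a minimal geopandas sketch of the approach; the file names are hypothetical placeholders, not the study's actual datasets or code:

```python
import geopandas as gpd

# Hypothetical inputs: line geometries for fiber routes and polygons
# for a projected sea-level-rise inundation zone.
cables = gpd.read_file("fiber_conduits.shp")
flood_zone = gpd.read_file("noaa_slr_projection.shp")

# Re-project both layers to a projected CRS so lengths come out in meters.
cables = cables.to_crs(epsg=5070)        # NAD83 / Conus Albers
flood_zone = flood_zone.to_crs(epsg=5070)

# Clip the cable lines to the flooded polygons and total the mileage at risk.
at_risk = gpd.clip(cables, flood_zone)
miles_at_risk = at_risk.length.sum() / 1609.34
print(f"{miles_at_risk:,.0f} miles of conduit inside the inundation zone")
```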

In New York City, about 20 percent of the fiber distributed throughout the city is predicted to flood within 15 years—along with 32 percent of the fibers that connect the metropolis to other cities, and 43 data centers. The research suggests that Seattle and Miami are especially vulnerable, along with many other coastal areas.

“All of this equipment is meant to be weather-resistant—but it’s not waterproof,” says Paul Barford, UW-Madison professor of computer science and a coauthor of the paper. Much of the system was put into place in the ’90s without much consideration of climate change, he says.

On top of that, much of the internet’s physical infrastructure is aging. Barford says a lot of it was designed to last only a few decades and is now nearing the end of its lifespan.

That is, if the floods don’t get to it first. While 15 years may seem shockingly soon, we’re already seeing more high tide flooding, points out Carol Barford (married to the aforementioned Paul), a coauthor on the paper and director of UW-Madison’s Center for Sustainability and the Global Environment. We’re seeing outages related to extreme weather, too: Hurricane Irma, for example, left over a million people without internet access.

It’s hard to predict exactly what would happen inland when coastal infrastructure floods—but the internet is an interconnected system, so damage in one place could affect others. For those inland, it’s possible that coastal flooding could cause a total internet connection outage, or issues in connecting to particular web pages and services.

Still, there’s a lot of research to be done. “We need to better understand the scope of the problem to create good solutions,” says Ramakrishnan Durairajan, a University of Oregon assistant professor of computer and information sciences and the paper’s lead author. Further studies could examine the effects of increased extreme weather on the system, he says, as well as ways to better engineer web traffic in the face of floods or other climate-induced disasters.

The takeaway, Carol Barford says: “If we want to be able to function like we expect every day, we’re going to have to spend money and make allowances and plans to accommodate what’s coming.”

Related Video

Science

King Tides Show Us How Climate Change Will Threaten Coastal Cities

Seawall-topping king tides occur when extra-high tides line up with other meteorological anomalies. In the past they were a novelty or a nuisance. Now they hint at the new normal, when sea level rise will render current coastlines obsolete.

Are Diplomas in Your DNA?

March 20, 2019 | Story

Last week, scientists published the biggest-ever study of the genetic influence on educational attainment. By analyzing the DNA of 1.1 million people, the international team discovered more than a thousand genetic variants that accounted—in small part—for how far a person gets through school. It made a lot of people nervous, as they imagined how this new research could be applied in Gattaca-esque testing tools.

But those concerns aren’t new—and neither is the kind of research published last Monday. This sort of correlational work for educational attainment has been in progress since at least 2011. And there is already a consumer product on the market that draws from that early research.

Log on to the Helix DNA marketplace—it’s like the app store for consumer genetic products—and the candy-colored website invites you to “Get started with DNA.” Clicking through takes you to one of Helix’s featured products: the DNAPassport. It was developed by Denver-based HumanCode, which Helix acquired in June, and lets users explore where their ancestors come from, whether they might be sensitive to gluten or lactose, and more than 40 other genetically influenced traits. One of them is something called “academic achievement.”

It’s based on a single genetic variant called rs11584700, near a gene called LRRN2 that codes for a protein involved in neuron signaling. And it was discovered by the same consortium that published the massive genetic analysis on Monday.

Social scientists’ first attempts at unearthing links between genes and people’s behaviors, in the mid-2000s, were plagued by small samples, weak methods, and unreproducible results. So to save the field from itself, a behavioral economist named Daniel Benjamin, at the University of Southern California, borrowed an idea from medical geneticists. He convinced research organizations from around the world to pool their data, giving them enough power to run something called a Genome-Wide Association Study, or GWAS. The first thing they looked at was how long people stay in school.

In 2011, Benjamin founded the Social Science Genetic Association Consortium, along with David Cesarini and Philipp Koellinger. Their goal was to find a reliable measure of heritable influence on educational attainment so that other researchers could control for genetics in their experiments, the same way they’d control for socioeconomic status or zip code. Since then, the SSGAC has uncovered more than 1,000 genetic variations associated with years of schooling. Benjamin’s team has gone out of its way to make it clear that each one exerts only a teeny tiny bit of influence—three additional weeks of education, max—and that even collectively, the variants are not powerful enough to predict an individual’s academic achievement.
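That "collectively" caveat is what geneticists formalize as a polygenic score: a weighted sum of a person's effect-allele counts across all the associated variants. Here is a toy sketch; the effect sizes are invented for illustration, standing in for the consortium's actual GWAS estimates:

```python
# Toy polygenic score: weighted sum of effect-allele counts.
# Effect sizes here are invented; real ones come from GWAS summary
# statistics and are individually tiny.
effect_sizes = {"rs11584700": 0.02, "rs9320913": 0.015}   # hypothetical betas
allele_counts = {"rs11584700": 2, "rs9320913": 1}         # 0, 1, or 2 copies per SNP

score = sum(beta * allele_counts[snp] for snp, beta in effect_sizes.items())
print(f"Polygenic score: {score:.3f}")   # a weak predictor for any one individual
```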

But that’s not stopping companies from using their research to sell people insights into their degree-seeking behavior. Based on one of the consortium’s earlier papers, and a second one using data from the UK’s National Child Development Study, HumanCode added the academic achievement feature to its DNAPassport app last December. Users who’ve got a pair of G’s or an A and a G at that location will learn that those genotypes are “associated with slightly higher educational attainment in Europeans.” If your spit turns up an AA, well, no higher ed association for you.
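In software terms, the feature described above amounts to a lookup on that single genotype. A minimal sketch of the logic the app's wording implies; the function name and report strings are illustrative, not HumanCode's actual code:

```python
# Illustrative lookup for a DNAPassport-style report on rs11584700.
# Genotypes are unordered pairs, so "AG" and "GA" read the same.
def academic_achievement_report(genotype: str) -> str:
    if "G" in genotype.upper():   # GG or AG
        return ("Associated with slightly higher educational "
                "attainment in Europeans.")
    return "Normal: no higher-ed association reported."   # AA

print(academic_achievement_report("AG"))
```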

“I’m not afraid to share that my own academic achievement SNP is not the desirable one,” says Chris Glodé, formerly the CEO of HumanCode, now a chief product officer at Helix, as he sends over a screenshot of his “Normal,” aka AA genotype. He says HumanCode made the decision to add the feature after seeing educational attainment show up on a number of third-party sites like GenePlaza, Genome Link, and Promethease. These are websites where people can go to upload the genetic data files they get from spit testing kits like 23andMe, Ancestry, and Helix, to further explore their DNA. “A lot of people are using these third-party services, so the idea that we’re going to prevent people from finding out this information for themselves seems not only unlikely, but also misaligned with our mission,” says Glodé. “The question then became, can we present this information responsibly?”

HumanCode sold DNAPassport on Helix’s marketplace even before the company was acquired. So its product has been subject to Helix’s scientific evaluation process since late 2017: The company requires that any variants used in a product be based on studies of more than 2,000 people whose results have been replicated. In the case of academic achievement, Helix also required that HumanCode list it with a disclaimer of sorts, called a LAB designation. “The research supporting the genetics underlying this trait require more work,” reads the site’s language. “Traits with the LAB designation may have limited scientific support from studies that are small/preliminary or lacking independent replication. Additionally, some traits with LAB designations have valid and replicated associations, but we want to learn more about how genetics influences the trait.”

About a quarter of DNAPassport’s traits fall under LAB designation. They’re all grouped together in the “Just for Fun” category of traits, “to reinforce that this information shouldn’t be used for making lifestyle decisions,” says Glodé. Sometimes, when new and better research comes out, traits get upgraded. If he were still the CEO of HumanCode and it was still an independent company, his team would probably update the academic achievement trait with the latest variants. But he says Helix has no plans to do that. Instead, it’s focused on encouraging developers to bring new products to its platform, including tests that might include educational attainment.

“Provided the context was appropriate, that a product was intended to be informational and educational, I think Helix would be open to it,” says Glodé. “But they would likely still require the results to be presented the way we did in DNAPassport, providing additional qualifications that the research isn’t as well established as for traits like height and eye color.” And that the results don’t apply beyond people of European descent. Like the vast majority of genetic population studies, the SSGAC’s research cohort is overwhelmingly European, and the variants identified have little predictive power for non-European populations.

Benjamin—the SSGAC co-founder—says relying on his study’s genetic score to predict educational attainment for an individual would be inaccurate. Using just a single variant, even more so. “If companies want to do this I would be concerned that they’re accurately communicating the information,” says Benjamin. “It’s not just a matter of disclosing the limitations of the predictive power.” Along with other members of the consortium and its advisory board, Benjamin spent hundreds of hours writing a 27-page FAQ to accompany their paper, explicitly because of the potential for misinterpretation. He credits companies like 23andMe that use a rating system to communicate how confident users can be in the results.

While 23andMe has played a significant role in supporting research into the genetics of educational attainment—the company contributed deidentified data on 365,536 of its research-consented customers to the SSGAC’s latest study—it does not at this time offer a report for academic achievement. Nor does it have any educational attainment reports in the product pipeline, according to a company spokesperson.

Remember, no one knows exactly how these genes create a tendency toward degree-seeking behavior. They could influence how fast neurons fire, or they could make sitting at a hard wooden desk for eight hours not feel like torture. Maybe they remove the stigma of asking for extra time on tests or assignment extensions. Researchers will need to do a lot more work to figure out the why. But when they do, you can be sure someone will try to sell it to you.