Get Ready for a Schooling in Angular Momentum

March 20, 2019

It's almost always the last topic in the first semester of introductory physics—angular momentum. Best for last, or something? I've used this concept to describe everything from fidget spinners to standing double back flips to the movement of strange interstellar asteroids.

But really, what the heck is angular momentum?

Let me start with the following situation. Imagine that there are two balls in space connected by a spring. Why are there two balls in space? I don't know—just use your imagination.

Not only are these balls connected by a spring, but the red ball has a mass that is three times the mass of the yellow ball—just for fun. Now the two balls are pushed such that they move around each other—just like this.

Yes, this is a numerical calculation. If you want to take a look at the code and play with it yourself (and you should), here it is. If you want all the details about how to make something like this, take a look at this post on the three body problem.

When we see stuff like these rotating spring-balls, we think about what is conserved—what doesn't change. Momentum is a good example of a conserved quantity. We can define the momentum of each ball as its mass multiplied by its velocity: p = mv.

Let me just make a plot of the total momentum as a function of time for this spring-ball system. Since momentum is a vector, I will have to plot one component of the momentum—just for fun, I will choose the x-coordinate. Here's what I get.

In that plot, the red curve is the x-momentum of the red (heavier) ball and the blue curve is for the yellow ball (yellow doesn't show up in the graph very well). The black line is the total momentum. Notice that as one object increases in momentum, the other object decreases. Momentum is conserved. You could do the same thing in the y-direction or the z-direction, but I think you get the idea.
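
If you want to see what that check looks like in code, here is a minimal sketch in plain Python with NumPy. This is not the GlowScript program linked above, and every number in it is made up for illustration: push the two balls around under the spring force and record the x-component of the total momentum at every step.

```python
# Minimal sketch (not the article's linked GlowScript code): two balls joined by a
# spring, stepped forward with a simple Euler-Cromer loop. All values are made up.
import numpy as np

m1, m2 = 3.0, 1.0          # red ball has three times the mass of the yellow ball (kg)
k, L0 = 5.0, 1.0           # assumed spring stiffness (N/m) and natural length (m)
r1 = np.array([-0.25, 0.0, 0.0])   # initial positions (m)
r2 = np.array([ 0.75, 0.0, 0.0])
v1 = np.array([0.0, -0.1, 0.0])    # initial velocities chosen so the pair orbits (m/s)
v2 = np.array([0.0,  0.3, 0.0])

dt, steps = 0.001, 20000
total_px = []
for _ in range(steps):
    sep = r2 - r1
    s = np.linalg.norm(sep) - L0                  # stretch of the spring
    F_on_1 = k * s * sep / np.linalg.norm(sep)    # spring pulls the balls together when stretched
    v1 += (F_on_1 / m1) * dt
    v2 += (-F_on_1 / m2) * dt
    r1 += v1 * dt
    r2 += v2 * dt
    total_px.append(m1 * v1[0] + m2 * v2[0])      # x-component of the total momentum

print(max(total_px) - min(total_px))   # prints a tiny number: total momentum never drifts
```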

What about energy? I can calculate two types of energy for this system consisting of the balls and the spring. There is kinetic energy, (1/2)mv² for each ball, and there is the spring potential energy, (1/2)ks².

The kinetic energy depends on the mass (m) and velocity (v) of each object, while the potential energy depends on the stiffness of the spring (k) and its stretch (s). Now I can plot the total energy of this system. Note that energy is a scalar quantity, so I don't have to plot just one component of it.

The black curve is again the total energy. Notice that it is constant. Energy is also conserved.
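
Here is the same kind of bookkeeping for the energy, as a tiny sketch using the same made-up state as the snippet above; evaluated inside the loop, the sum of kinetic and spring potential energy stays put.

```python
# Energy bookkeeping for one instant of the two-ball-and-spring system (illustrative numbers only).
import numpy as np

m1, m2, k, L0 = 3.0, 1.0, 5.0, 1.0
r1, r2 = np.array([-0.25, 0.0, 0.0]), np.array([0.75, 0.0, 0.0])
v1, v2 = np.array([0.0, -0.1, 0.0]), np.array([0.0, 0.3, 0.0])

KE = 0.5 * m1 * np.dot(v1, v1) + 0.5 * m2 * np.dot(v2, v2)   # (1/2) m v^2 for each ball
s = np.linalg.norm(r2 - r1) - L0                              # stretch of the spring
PE = 0.5 * k * s**2                                           # (1/2) k s^2
print(KE + PE)   # computed inside the simulation loop, this sum stays constant
```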

But is there another conserved quantity that could be calculated? Is the angular velocity conserved? Clearly it is not. As the balls come closer together, they seem to spin faster. How about a quick check, using a plot of the angular velocity as a function of time.

Nope: Clearly, this is not conserved. I could plot the angular velocity of each ball—but they would just have the same value and not add up to a constant.

OK, but there is something else that can be calculated that will perhaps be conserved. You guessed it: It's called the angular momentum. The angular momentum of a single particle depends on both the momentum of that particle and its vector location from some point. The angular momentum can be calculated as the cross product of the position vector and the momentum vector: L = r × p.

Although this seems like a simple expression, there is much to go over. First, the L vector represents the angular momentum—yes, it's a vector. Second, the r vector is a position vector from some point to the object, and finally the p vector represents the momentum (the product of mass and velocity). But what about that "×"? That is the cross product operator. The cross product is an operation between two vectors that produces a vector result (you can't multiply two vectors together the way you multiply two plain numbers).

I don't want to go into a bunch of math regarding the cross product, so instead I will just show it to you. Here is a quick Python program showing two vectors (A and B) as well as A × B (you would say that as "A cross B").

You can click and drag the yellow A vector around and see what happens to the resultant of A × B. Also, don't forget that you can always look at the code by clicking the "pencil" icon and then clicking "play" to run it. Notice that A × B is always perpendicular to both A and B—thus this is always a three-dimensional problem. Oh, you can also rotate the vectors by right-clicking (or ctrl-clicking) and dragging.
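
If the interactive demo doesn't load for you, here is a non-interactive stand-in with NumPy. The two vectors are picked purely for illustration; the point is the perpendicularity check.

```python
# Stand-in for the draggable-vector demo: compute A x B and check that the result
# is perpendicular to both inputs (example vectors only).
import numpy as np

A = np.array([2.0, 1.0, 0.0])
B = np.array([0.5, 3.0, 0.0])

C = np.cross(A, B)                  # A x B
print(C)                            # [0, 0, 5.5] -- points out of the A-B plane
print(np.dot(C, A), np.dot(C, B))   # both zero: A x B is perpendicular to A and to B
```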

But now I can calculate (and plot) the total angular momentum of this ball-spring system. Actually, I can't plot the angular momentum since that's a vector. Instead I will plot the z-component of the angular momentum. Also, I need to pick a point about which to calculate the angular momentum. I will use the center of mass for the ball-spring system.

There are some important things to notice in this plot. First, each ball has a constant z-component of angular momentum, so of course the total angular momentum is also constant. Second, the z-component of angular momentum is negative. This means the angular momentum vector points in a direction that would appear to be into the screen (from your view).
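
In code, the calculation behind that plot looks something like this minimal sketch (same made-up state as the earlier snippets): compute r × p for each ball about the center of mass and look at the z-components.

```python
# Angular momentum bookkeeping, L = r x p, about the center of mass
# (same illustrative state as the snippets above).
import numpy as np

m1, m2 = 3.0, 1.0
r1, r2 = np.array([-0.25, 0.0, 0.0]), np.array([0.75, 0.0, 0.0])
v1, v2 = np.array([0.0, -0.1, 0.0]), np.array([0.0, 0.3, 0.0])

r_com = (m1 * r1 + m2 * r2) / (m1 + m2)   # reference point: the center of mass
L1 = np.cross(r1 - r_com, m1 * v1)        # r x p for the red ball
L2 = np.cross(r2 - r_com, m2 * v2)        # r x p for the yellow ball
print(L1[2], L2[2], (L1 + L2)[2])         # z-components for each ball and the total;
                                          # evaluated in the loop, each stays constant
```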

So it appears that this quantity called angular momentum is indeed conserved. If you want, you can check that the angular momentum is also conserved in the x- and y-directions (it is).

"But wait!" you say. Maybe angular momentum is only conserved because I am calculating it with respect to the center of mass for the ball-spring system. OK, fine. Let's move this reference point somewhere else: the momentum vectors stay the same, but the r-vectors for the two balls are now different. Here's what I get for the z-component of angular momentum.

Now you can see that the z-components for the two balls individually change, but the total angular momentum is constant. So angular momentum is still conserved. In the end, angular momentum is conserved in situations with no external torque, like these spring-balls. But why do we even need angular momentum? In this case, we really don't need it. It is quite simple to model the motion of the objects just using the momentum principle and forces (which is how I made the Python model you see).

But what about something else? Take a look at this quick experiment. There is a rotating platform with another disk attached to a motor.
What happens when the motor-disk starts to spin? Watch. (There's a YouTube version here.)

Again, angular momentum is conserved. As the motor disk starts to spin one way, the rest of the platform spins the other way such that the total angular momentum is constant (and zero in this case). For a situation like this, it would be pretty darn difficult to model this situation with just forces and momentum. Oh, you could indeed do it—but you would have to consider both the platform and the disk as many, many small masses each with different momentum vectors and position vectors. It would be pretty much impossible to explain with that method. However, by using angular momentum for these rigid objects, it's not such a bad physics problem.

In the end, angular momentum is yet another thing that we can calculate—and it turns out to be useful in quite a number of situations. If you can find some other quantity that is conserved in different situations, you will probably be famous. You can also name the quantity after yourself if that makes you happy.

Related Video

Science

Science of Sport: Gymnastics

Charlotte Drury, Maggie Nichols, and Aly Raisman talk to WIRED about the skill, precision, and control they employ when performing various gymnastics moves and when training for the Olympics.

Trailers. Casting announcements. Development snarls. Box-office battles. Now that the entertainment world has a news churn to rival cable news, it's impossible to keep tabs on everything. May we introduce, then, The Monitor, Wired’s new twice-weekly round-up of what you might have missed in the hyper-drive-fast world of popular culture. (Yes, we've used the name before—for both a video series and a podcast—but we just can't stay away. It works on two levels!) In today’s inaugural edition: The Walking Dead says goodbye to Rick (sort of), the Merc tames his Mouth for a holiday cash grab, Bohemian Rhapsody is the savior of the box-office universe, and AMC’s anti-MoviePass plan expands–but at a cost. Come for the terrible puns, stay for the stuff that makes you a more informed fan. Or vice versa.

Grimes is Going Out With Elan

The Walking Dead star Andrew Lincoln may have made his exit from the long-running (and recently ratings-challenged) zombie series on Sunday night, but he’ll soon be back: AMC has announced a trio of spin-off films featuring Lincoln’s character, the beleaguered dead-hunter Rick Grimes. The network hasn’t landed on a premiere date for the films, the first of which will reportedly begin production next year, though AMC’s Scott M. Gimple told The Hollywood Reporter they’re part of an effort to keep Dead alive for years to come: “We're going to be doing specials, [and] new series are quite a possibility…we're going to introduce new characters and new situations” (as for more specific plans, right now, we’re all on a Negan-know basis). Meanwhile, in an interview with The New York Times, Lincoln addressed the death of Glenn, the departure of original showrunner Frank Darabont, and the extreme measures he takes on-set before filming begins: “I don’t care what it takes to get to a place. If I’ve got snot coming out of my mouth, that’s the way it’s gonna be.”

Deadpool Cleans Up His Act

Deadpool 2 will return to theaters next month–albeit with a bit less shooting and swearing. A newly edited PG-13 edition of the movie, titled Once Upon a Deadpool, features several new sequences–all reportedly filmed in one day–featuring star Ryan Reynolds alongside special guest Fred Savage, who will spoof his turn in the 1987 hit The Princess Bride. The revamped film's theatrical release will benefit the charity Fuck Cancer, but it will also give Disney–which picked up the Deadpool franchise in its recent acquisition of 20th Century Fox–access to a family-friendly version of Reynolds' hit, one that could potentially play in China, and perhaps be added to Disney's forthcoming streaming service. It's a smart plan, as long as no one Fox it up.

Fat-Bottom-Lined Box Office

The Queen biopic Bohemian Rhapsody, starring Rami Malek as toothsome frontman Freddie Mercury, earned $50 million in the U.S. in its opening weekend, overcoming some very, very frightening reviews, not to mention a messy production. Disney’s equally hard-to-make The Nutcracker and the Four Realms–which cycled through a pair of directors, or roughly two per realm–opened behind Rhapsody with a disappointing $20 million, ending any franchise hopes for the studio. And Tiffany Haddish’s fourth(!) movie of 2018, the Tyler Perry-directed romantic comedy Nobody’s Fool, made $14 million, proving yet again her draw as a big-screen comedy star–a Hollywood rarity these days, and one that puts her in a realm of her own.

AMC What They Did There?

The theater chain announced that its monthly Stubs A-List plan–think of it like MoviePass, except without all of the dubious financing or wiggy availability–will soon have more than half a million subscribers. But the company also noted that the service’s price will increase to as much as $23.95 a month in some states. That’s far higher than MoviePass, which at one point was less than seven bucks a month. But it’s a small price to pay for the opportunity to watch a Gerard Butler submarine movie up to twelve times in a row!

On Monday night, residents of the Los Angeles neighborhoods of Westwood, Los Feliz, Silver Lake, and parts of the San Fernando Valley experienced a mild earthquake—a magnitude 3.6. Most people slept through the temblor and no damage was reported.

But a select group of 150 LA residents got a text alert on their mobile phone a full eight seconds before the quake hit at 11:10 pm—enough time for people to drop, cover, and hold on. Along with a pinned location of the quake's epicenter, the text gave its magnitude and intensity, the number of seconds left before the shaking, and instructions on what to do. The system detects an earthquake's up-and-down p-wave, which travels faster and precedes the destructive horizontal s-wave, and converts that signal into a broadcast warning.
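
As a rough back-of-the-envelope illustration—not ShakeAlert's actual algorithm or parameters—the warning time is simply the gap between the S-wave's travel time and the P-wave's, minus however long detection and broadcasting take. The sketch below uses ballpark crustal wave speeds and an assumed processing delay; the takeaway is that the farther you are from the epicenter, the more warning the physics can give, and very close in there may be none at all.

```python
# Rough estimate of how much warning a P-wave detection can buy.
# Wave speeds and the processing delay are ballpark assumptions, not ShakeAlert's values.
P_SPEED_KM_S = 6.0    # assumed P-wave speed in the crust
S_SPEED_KM_S = 3.5    # assumed S-wave speed (the damaging shaking)

def warning_seconds(distance_km: float, processing_delay_s: float = 4.0) -> float:
    """Approximate seconds between an alert and the S-wave's arrival at `distance_km`,
    assuming sensors near the epicenter catch the P-wave almost immediately."""
    gap = distance_km / S_SPEED_KM_S - distance_km / P_SPEED_KM_S
    return gap - processing_delay_s

for d in (20, 50, 100):
    # A negative result means the shaking arrives before the alert (the "blind zone").
    print(f"{d} km from the epicenter: ~{warning_seconds(d):.0f} s of warning")
```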

Other parts of the world have similar systems—but ones accessible to a wider population. On Tuesday afternoon, Mexico City sirens blared a few seconds before a magnitude 7.1 earthquake struck the capital, flattening hundreds of buildings and killing at least 200 people. When a magnitude 8.1 quake hit off the coast of Mexico on September 7, the SASMEX alert system, which collects data from sensors along Mexico’s western coast, gave residents more than a minute’s warning via sirens and even news reports on radio and TV. A complementary smartphone app is used by millions of Mexicans. And Japan also has a sophisticated earthquake text-alert system, giving tsunami and earthquake warnings to the entire nation.

So why is the US earthquake system stuck in beta mode, with only a lucky few getting an earthquake heads-up? The LA residents received their early warning as part of a pilot study conducted by the US Geological Survey and Santa Monica-based Early Warning Labs. But experts say lack of money and bureaucratic inertia have stymied the USGS ShakeAlert warning system, despite a decade of promises and positive trial runs.

The USGS has only installed about 40 percent of the 1,675 sensors it needs to protect seismically vulnerable areas of the West Coast in Los Angeles, the San Francisco Bay Area, and Seattle, says Doug Given, who coordinates the ShakeAlert system at the USGS Pasadena office.
“We still don’t have full funding,” says Given. “We are on a continuing resolution through December 8 and are operating at the level of last year’s budget."

ShakeAlert costs a measly $16 million each year to build and operate, but the USGS has only been given $10 million each year. The Trump administration's proposed budget had zeroed-out the entire ShakeAlert program, but dozens of lawmakers from San Diego to Seattle protested. A House committee blocked the cuts in July, but the final budget document is still awaiting passage.

The promise of ShakeAlert—which goes beyond the smartphone app tested by those LA residents—has already been shown in many ways. The system gives automated early warnings to slow BART trains in the Bay Area and protect California oil and gas refinery operations. ShakeAlert will even automatically put NASA’s deep space telescope in Goldstone, California into a safe mode. A few luxury condo buildings in Marina del Rey, Calif., and Santa Monica College have also purchased a commercial version of the ShakeAlert warning, which piggybacks off the USGS sensors but offers a direct signal to the building that slows elevators inside.

But getting a widespread text alert system up and running for the millions of Californians (and Oregonians and Washingtonians) is a tougher sell. The engineers and scientists working on the project have to be confident there won’t be false alarms that would weaken the warning’s credibility.

They are also dealing with a bottleneck from US phone companies who haven’t been able to embed the warning signal into existing wireless networks, according to Josh Bashioum, founder and principal investigator of Early Warning Labs. “Unfortunately, the way our telcos are set up, they aren’t fast enough to deliver an early warning,” Bashioum says.

The providers don't have the ability to send an automated text message to the millions of people living in Southern California, for example, that could also override all the other signals that phones are processing at the same time. These texts have to go out in the narrow window between the detection of the p-wave and the arrival of the potentially deadly s-wave, or they aren't any good. Then again, Japanese cell companies have figured it out.

The USGS and Bashioum have been meeting with the cell providers to push the effort, but Given expects it won’t happen for another three to five years. In the meantime, he hopes to at least get more seismic sensors in the ground so that scientists can alert first responders when a big quake hits. “The closer your [seismic] station is to the earthquake, the quicker you are going to recognize it, detect it, and send the alert,” Given says. “Given that we don’t know where the earthquake is going to occur, we have to have sensors all over the potential area of coverage.”

Sure, he could put a lot more sensors along the San Andreas fault, which has the highest odds of another quake. But that won't stop other quakes from hitting. For now, residents who live near seismic zones will have to make do with a real-time warning, and hope their building is up to code.

Related Video

Science

Cal Stadium Quake Retrofit

The rift under UC Berkeley's arena has been called a tectonic time bomb. Here's the university's $321 million retrofit plan.

Marathon wisdom told you it was too rainy, too slippery, and too warm for fast times at this morning’s Berlin Marathon, but Eliud Kipchoge refused to be overcome, either by the conditions or by his competitors. He won a race against perhaps the strongest field assembled in the past decade, even after a surprise attack by a debutant marathoner, Guye Adola, threatened to spoil his day. Kipchoge eventually missed the world record by 35 seconds, finishing in 2:03:32—a miraculous time in the circumstances. In both the fact and the manner of his victory, he has laid to rest any debate about who is the best marathon runner of this generation.

Berlin woke up in a cloud. In the forested Tiergarten, where the race starts, it was 57 degrees—significantly too hot for the fastest times—and the air was thick and moist. The official weather forecast said it was 99 percent humidity, but it’s hard to imagine how they missed that final one percent. The air was like soup. Humidity is a problem for elite athletes.

If the atmosphere was thick, so was the sense of expectation. As the three star athletes—Eliud Kipchoge, who ran 2:00:25 in Nike’s Breaking2 experiment earlier this year; Wilson Kipsang, the only man ever to win New York, London, and Berlin; and Kenenisa Bekele, world and Olympic record holder in 5,000 and 10,000 meters, and last year’s Berlin winner—warmed up in front of the start line, they betrayed their states of mind. Bekele looked tight with nerves as he stretched out his arms above his head, while Kipsang and Kipchoge ran some fast sprints and smiled easily to the crowd. Kipsang’s grin cracked briefly when the starter announced his rival, Kipchoge, as “the world’s best marathon runner.”

Thick Air and Slippery Turns

From the start, Kipchoge, wearing a white singlet, black half-tights, and red shoes, tucked in behind the three elite pacers, who had been asked to lead the fastest athletes to halfway in a previously unthinkable split time of 60 minutes and 50 seconds. The rain soon became intense, and it was obvious that nobody was going to run so fast for the first half. Simply turning a corner required care and concentration. Every time the lead pack did so, they slowed considerably. As the rain intensified, Gideon Kipketer, the rangy pacemaker (and Kipchoge’s training partner), screwed his face up into the weather.

The lead pack, which included not just the three big names but the Ethiopian debutant Adola and the Kenyan Vincent Kipruto, made halfway in 61:30, a second or two outside world record pace. In the conditions, it was an excellent split. The weather also started to lift a little, and Kipchoge looked increasingly comfortable.

Bekele, though, was dropped from the lead pack at halfway, unable to live with the pace. He did not finish the race. By 17 miles, only one pacemaker had survived—Sammy Kitwara. He dropped out at the 30-kilometer (18.6 mile) mark, and so—to everyone’s surprise—did Wilson Kipsang, clutching his stomach.

Almost everyone was suffering. Not only was the road slippery, but the athletes’ clothes were sticking to their skin, and—most importantly—all the runners would have found it hard to regulate their temperature. One of the limiting factors in marathon running is an athlete’s ability to dissipate the heat generated while synthesizing the energy needed to run so fast. Mostly, body heat is lost through sweating. But the thicker and warmer the air, the harder that process becomes.

For the final seven and a half miles, it was Kipchoge, the master, versus Adola, the newcomer. Adola, who is taller and has a scruffier gait, seemed relaxed, and Kipchoge looked actively irritated by the close attention the Ethiopian was paying him. Kipchoge asked Adola more than once to move either in front or behind him. Adola continued as he was, shoulder to shoulder with the senior man. As they jostled, the world record drifted away. At the 35-kilometer (21.7 mile) marker, Kipchoge was around six seconds outside world record pace. But, oddly, it was at this moment that Kipchoge began to smile. Battle was joined.

Race to the Finish

At around 23 miles, Adola attacked, opening a gap of 10 meters and moving to the other side of the road as if to accentuate the distance between him and Kipchoge. The Kenyan responded, and seemed to be reeling Adola in, but the Ethiopian pressed again. Even as the world record drifted toward impossibility, nobody who was watching the race cared. This was thrilling sport, a true duel. With two miles to go, Kipchoge seemed visibly to muster reserves of energy for a final attempt to break Adola, and at the final drinks station at 40 kilometers (24.8 miles), he caught him, and then blew past him.

Kipchoge finished with a kick. When he crossed the line, he looked as happy as a lottery winner. He hugged his coach, Patrick Sang, and saluted the crowd. Sang is not normally given to hyperbole, but his pride, minutes after the race had ended, was uncontainable.

“In these terrible conditions, two-oh-three is amazing,” Sang told me. “There was the mental challenge, the physical challenge, the environmental challenge… He is one of the great runners.”

I’d go a step further. Eliud Kipchoge has never broken the world record, but I’ve now watched four races in which he was in shape to do so—the London Marathon of 2016, which he won in 2:03:05, the Rio Olympic Marathon which he won in 2:08:44, the Breaking2 race at Monza which he won in 2:00:25, and today’s Berlin Marathon. In each case, he would have ripped chunks out of the world record in perfect conditions. But he has either been running on a slow course, or in slow conditions, and the title of world-record holder has evaded him. That’s marathon racing. In this sport, you have to be good and lucky.

Kipchoge may never break the world record now. The years, and the marathons, are piling up. He would never admit this, but it’s possible his chance has come and gone. In the final reckoning, it won’t matter. Nobody who watched Kipchoge win those four races could be in any doubt of his superiority. Today’s race was a reminder not just of his physical talents but of his mental fortitude. World record or no world record, he is the greatest.

Related Video

Science

How Nike Nearly Cracked the Perfect Marathon

Runners have been trying to break through the 2 hour marathon mark for decades. Here's the incredible science behind how Eliud Kipchoge came within 25 seconds in Nike's Breaking2 project.

In the early ’00s, few web endeavors seemed less bound for long-term glory than CollegeHumor.com. The site launched in 1999 as a video and sight-gag repository “dedicated to grinding your academic efforts to a halt.” Early on, that meant lots of bro-friendly distractions, like photos of students passed out on lawns, naughtily titled JPEGs, and video series like “Husky Dave the Fat Guy”. There was enough low-brow, high-bandwidth material on CollegeHumor–and enough users eager to submit their own homemade juvenilia–that, at one point, the site kept a running list of high schools that had banned it from their classrooms.

But in the decades that followed, CollegeHumor’s users aged out of school–and so did the site, which began focusing less on campus hijinks and more on office-space goofiness and even politics. Along the way, it built up a healthy YouTube following, with the official CollegeHumor channel alone claiming more than 13 million subscribers. And in recent years, following a relocation from New York City to Los Angeles, the company found success with TV shows like truTV’s Adam Ruins Everything. CollegeHumor became one of the web’s few legacy companies, surviving while numerous other web-comedy outfits ground to a halt.

Now the long-running company–which has been majority-owned by media heavyweight IAC since 2006–is matriculating into the unpredictable subscription-service realm. Today CollegeHumor announces DROPOUT, a streaming platform that will serve up a mix of original videos, online comics, and chat stories. Available initially as a mobile-web offering, with an introductory price of $3.99 a month, DROPOUT marks a sort of declaration of independence for the company: Thanks to increased restrictions on YouTube, not to mention the audience-friendly demands of network TV, CollegeHumor was experiencing “a little creative repression,” says Sam Reich, the company’s Head of Video. “Now, we get to do whatever we want.”

DROPOUT’s initial slate features more than ten shows, including See Plum Run, a school-election-themed revival of CollegeHumor’s popular Precious Plum series; the nerd-knowledge game show Um, Actually; and the dating series Lonely & Horny, featuring returning CollegeHumor stars Jake Hurwitz and Amir Blumenfeld. Also in the works is next year’s WTF 101, an animated program featuring a bunch of in-detention teens “learning the most fucked-up things about our world,” says Reich. “It's a show we couldn't do on TV, because it's way too R-rated.”

The hope is that the company’s more grown-up material–not to mention its decades-old fanbase–will help CollegeHumor succeed where several other streaming-service efforts have failed. Last year, the NBC-owned comedy site Seeso–which featured material from Saturday Night Live, as well as original shows like HarmonQuest–folded after less than two years. The Verizon-launched free service Go90, which featured a handful of comedy offerings, closed for good this summer–not long after the millennial-aimed upstart Fullscreen announced it was shutting down.

And at a time when Facebook is serving up an endless stream of personalized comedy videos, getting viewers to commit to a stand-alone service is riskier than ever. “If I can get funny videos on the internet for free, how does somebody like CollegeHumor break through?” asks James McQuivey, principal analyst at Forrester Research. “The blue-humor angle gives them a way to rise above the noise. And I think that could work–at first.”

The bigger challenge for a service like DROPOUT, McQuivey says, is keeping users around after the initial few months of enthusiasm. “You have to produce original content at a high volume,” he says. “If people are only coming back once or twice a month, they won’t pay for it. They have to come back once or twice a day.”

Reich and his colleagues know long-time fans might balk at the idea of handing over a few bucks each month for DROPOUT. Yet they believe it’s a fair trade-off for CollegeHumor’s newfound freedom. Reich says conversations about an on-demand outlet began in late 2016, after a TV series CollegeHumor had been developing with a big network–Reich is prevented from saying which one–went belly-up. “I was in this vulnerable place,” says Reich. “We’d just done what I thought was the best pilot to ever come through our company, and it was summarily rejected.” Eventually, Reich says, “we all stopped and looked at each other and went, ‘How do we take back more ownership?’”

CollegeHumor isn’t halting its TV efforts: In addition to Adam Ruins Everything, the company produces the series Hot Date for Pop. But DROPOUT allows the company to circumvent the restrictions that are an inevitable part of the development process, as executives have to pay heed to advertisers’ wishes. And it gives CollegeHumor an alternative to YouTube. The company still releases an average of 3-4 new videos to YouTube a week. But recently, Reich says, the platform “has become less and less friendly a place to be even a little bit outrageous.”

That’s caused problems for some of CollegeHumor’s videos from the past year, including “Our Weirdest Sex Misconceptions” and “CH Does The Purge”–both of which were flagged by the service as being inappropriate for some viewers. Such restrictions make it harder for CollegeHumor to get those clips in front of viewers. According to Reich, YouTube’s algorithm “sometimes interprets a ‘comedy video about sex’ as being a ‘sex video.’” CollegeHumor can contest the ruling, but they don’t always wind up winning.

It’s not just YouTube’s recent crackdowns that have been a turn-off. Reich says the platform was never much of a money-maker for CollegeHumor. And for comedy creators, YouTube is hardly the eyeball-jackpot it used to be: Even four years ago, a CollegeHumor hit like “If Google Was a Guy” could go on to earn more than 40 million views–a number that seems impossible for any comedy sketch in 2019. “These days,” says Reich, “if a video gets over a million views, we consider that a hit.”

Ultimately, DROPOUT represents a way for CollegeHumor to move toward a less YouTube-tied future–as well as an attempt to recapture the lawlessness of the web’s not-so-distant digital past. “It’s not the frattiness we’re trying to get back,” says Reich, who’s been with the company since 2006. “But ten years ago, the internet used to be a haven for creative experimentation.” To get that back, “we needed to create our own platform, so we aren't dependent on anyone else.” Just the people willing to pay for yet another subscription service.

The Future of Work: The Branch, by Eugene Lim

March 20, 2019

“A library of the future might also be, at its best, a sanctuary where we are encouraged to spend entire hours looking at just one thing.” —Michael Agresta, “What Will Become of the Library?” Slate (2014)

The library of the future is more or less the same. That is, the branch is an actual and metaphoric Faraday cage. You enter, a node and a target, streamed at and pushed and yanked, penetrated by and extruding information, sloppy with it. And then your implants are cut off. Your watch, your glasses, jacket, underwear, your lenses, tablet, chips, your nanos—all go dry.

You’ve come to the library as usual out of desperation, yearning, boredom. There is a heart of uncertainty in your life, and you might wish to ask the library any number of questions: Should you take this job or that one? Won’t you ever get out of debt? Will he ever love you? Does she love you enough? Enough to leave her wife? Why, after all this time, did he show up again? Why can’t I sleep? I think my kid thinks I’m stupid. Why do I sleep so much? Why oh why am I so fucked up?

The librarian sits in a wooden chair, dressed in starched, sharply pressed clothing, muted colors. Today it’s the skinny dapper dude. You slightly prefer him to the short hairy man, but above all you like the zaftig disheveled woman—though, in fact, they are all remarkably similar: efficient, a sad vulnerability offset by an almost smug confidence in their training and knowledge, impersonal yet generous. These librarians of the future.

Eight sci-fi writers imagine the bold future of work.

Since this isn’t your first visit to the branch—you’re a regular—you can skip the usual orientations: the ritual data entry of blood type and genome sequence, the small pendulum and cutting of card deck, the opening up of palm and the tossing of yarrow. Those kinds of biometrics are for the newfangled anyway. Most of the time, here, it’s the more traditional talk therapy. What brings you in today? How did that make you feel? What were they like? Pretend she’s sitting in this chair.

“I got a weird call from my sister,” you say. “Her son is developing an eating disorder, and I wanted to tell her it’s because our mother was a monster and you’re becoming exactly the same … I never felt comfortable enough in my own skin … Always trying to please them, to please everyone, get them to like me … After we hung up, I wanted to eat the phone I was so mad …”

The librarian listens and prods and nods. Near the end, before you both rise, he repeats the usual admonitions, prayers, and liturgy. He says, “The infinite library, which is outside the library, is not the library. The world is everything that is the case. Relieve me of the bondage of self. The true library is human error, metonym, forgetting. To study the self is to forget the self. The library is not the map and is not the territory; the library is the map and the library is the territory. The empire never ended. It’s a small world after all …” You get tired of the mumbo jumbo but nonetheless respect the ritual.

You finish the advisory interview with a tour. He takes your arm and guides you around the stacks. He points out a new Japanese crime novel, a recently published translation of a Uruguayan rapper’s lyrics, and a popular cookbook of Basque cuisine. As always, he says—before disappearing to his next appointment—the most important thing is to take the time to browse.

You do and find a new series of yaoi manga and a trashy history of the Russian Revolution. In an overstuffed leather armchair, you spend a few hours reading the Uruguayan rapper’s compositions. They are startling, and they articulate for you dense intergenerational griefs you hadn’t before known you’d been carrying. Looking up, you realize the afternoon is nearly over. You put the books in a bag and feel their promising weight. The clouded, unbodied versions of these are out there, weightless, in the infinite library, but you came here to have these minds manifested in the physical; virtual reality machines made out of printed voice; handheld AI instantiated by paper, cardboard, and reader response.

Your steps out of the library are careless with serenity. Then you exit the building and so are instantly hit with the packages, whoops, and floods. You recall and repeat the librarian’s words: The infinite library is not the library. The infinite library is not the true library. The true library is human error, metonym, forgetting. The infinite library, which is outside the library, is not the library. The true library is incomplete.


Eugene Lim (@lim_eugene) is the author, most recently, of Dear Cyborgs, and works as a high school librarian.

This article is part of The Future of Work from the January issue. Subscribe now.

Let us know what you think about this article. Submit a letter to the editor at [email protected].


  • Introduction: What'll We Do?
  • Real Girls by Laurie Penny
  • The Trustless by Ken Liu
  • Placebo by Charles Yu
  • The Farm by Charlie Jane Anders
  • The Third Petal by Nisi Shawl
  • Maximum Outflow by Adam Rogers
  • Compulsory by Martha Wells

Related Video

Business

Robots & Us: A Brief History of Our Robotic Future

Artificial intelligence and automation stand to upend nearly every aspect of modern life, from transportation to health care and even work. So how did we get here and where are we going?

When someone takes their own life, they leave behind an inheritance of unanswered questions. “Why did they do it?” “Why didn’t we see this coming?” “Why didn’t I help them sooner?” If suicide were easy to diagnose from the outside, it wouldn’t be the public health curse it is today. In 2014, suicide rates in the US surged to a 30-year high; suicide is now the second leading cause of death among young adults. But what if you could get inside someone’s head, to see when dark thoughts might turn to action?

That’s what scientists are now attempting to do with the help of brain scans and artificial intelligence. In a study published today in Nature Human Behaviour, researchers at Carnegie Mellon and the University of Pittsburgh analyzed how suicidal individuals think and feel differently about life and death, by looking at patterns of how their brains light up in an fMRI machine. Then they trained a machine learning algorithm to isolate those signals—a frontal lobe flare at the mention of the word “death,” for example. The computational classifier was able to pick out the suicidal ideators with more than 90 percent accuracy. Furthermore, it was able to distinguish people who had actually attempted self-harm from those who had only thought about it.

Thing is, fMRI studies like this suffer from some well-known shortcomings. The study had a small sample size—34 subjects—so while the algorithm might excel at spotting particular blobs in this set of brains, it’s not obvious it would work as well in a broader population. Another dilemma that bedevils fMRI studies: Just because two things occur at the same time doesn’t prove one causes the other. And then there’s the whole taint of tautology to worry about; scientists decide certain parts of the brain do certain things, then when they observe a hand-picked set of triggers lighting them up, boom, confirmation.

In today’s study, the researchers started with 17 young adults between the ages of 18 and 30 who had recently reported suicidal ideation to their therapists. Then they recruited 17 neurotypical control participants and put them each inside an fMRI scanner. While inside the tube, subjects saw a random series of 30 words. Ten were generally positive, 10 were generally negative, and 10 were specifically associated with death and suicide. Then researchers asked the subjects to think about each word for three seconds as it showed up on a screen in front of them. “What does ‘trouble’ mean for you?” “What about ‘carefree,’ what’s the key concept there?” For each word, the researchers recorded the subjects' cerebral blood flow to find out which parts of their brains seemed to be at work.

Then they took those brain scans and fed them to a machine learning classifier. For each word, they told the algorithm which scans belonged to the suicidal ideators and which belonged to the control group, leaving one person at random out of the training set. Once it got good at telling the two apart, they gave it the left-out person. They did this for all 30 words, each time excluding one test subject. At the end, the classifier could reliably look at a scan and say whether or not that person had thought about killing themselves 91 percent of the time. To see how well it could more generally parse people, they then turned it on 21 additional suicidal ideators, who had been excluded from the main analyses because their brain scans had been too messy. Using the six most discriminating concepts—death, cruelty, trouble, carefree, good, and praise—the classifier spotted the ones who’d thought about suicide 87 percent of the time.
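
The train-on-everyone-but-one, test-on-the-held-out-person procedure described here is standard leave-one-out cross-validation. The sketch below shows the scheme only, using synthetic stand-in data and an off-the-shelf classifier rather than the study's actual fMRI features or model.

```python
# Sketch of leave-one-subject-out cross-validation, the evaluation scheme described above.
# Synthetic stand-in data; not the study's features, classifier, or results.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 20))          # 34 "subjects", 20 made-up activation features
y = np.array([0] * 17 + [1] * 17)      # 17 controls, 17 ideators
X[y == 1] += 0.8                       # inject a detectable group difference

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])       # train with one subject held out
    correct += clf.predict(X[test_idx])[0] == y[test_idx][0]  # test on the held-out subject

print(f"leave-one-out accuracy: {correct / len(y):.0%}")
```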

“The fact that it still performed well with noisier data tells us that the model is more broadly generalizable,” says Marcel Just, a psychologist at Carnegie Mellon and lead author on the paper. But he says the approach needs more testing to determine if it could successfully monitor or predict future suicide attempts. Comparing groups of individuals with and without suicide risk isn’t the same thing as holding up a brain scan and assigning its owner a likelihood of going through with it.

But that’s where this is all headed. Right now, the only way doctors can know if a patient is thinking of harming themselves is if they report it to a therapist, and many don’t. In a study of people who committed suicide either in the hospital or immediately following discharge, nearly 80 percent denied thinking about it to the last mental healthcare professional they saw. So there is a real need for better predictive tools. And a real opportunity for AI to fill that void. But probably not with fMRI data.

It’s just not practical. The scans can cost a few thousand dollars, and insurers only cover them if there is a valid clinical reason to do so. That is, if a doctor thinks the only way to diagnose what’s wrong with you is to stick you in a giant magnet. While plenty of neuroscience papers make use of fMRI, in the clinic, the imaging procedure is reserved for very rare cases. Most hospitals aren’t equipped with the machinery, for that very reason. Which is why Just is planning to replicate the study—but with patients wearing electronic sensors on their head while they're in the tube. Electroencephalograms, or EEGs, are one hundredth the price of fMRI equipment. The idea is to tie predictive brain scan signals to corresponding EEG readouts, so that doctors can use the much cheaper test to identify high-risk patients.

Other scientists are already mining more accessible kinds of data to find telltale signatures of impending suicide. Researchers at Florida State and Vanderbilt recently trained a machine learning algorithm on 3,250 electronic medical records for people who had attempted suicide sometime in the last 20 years. It identifies people not by their brain activity patterns, but by things like age, sex, prescriptions, and medical history. And it correctly predicts future suicide attempts about 85 percent of the time.

“As a practicing doctor, none of those things on their own might pop out to me, but the computer can spot which combinations of features are predictive of suicide risk,” says Colin Walsh, an internist and clinical informatician at Vanderbilt who’s working to turn the algorithm he helped develop into a monitoring tool doctors and other healthcare professionals in Nashville can use to keep tabs on patients. “To actually get used, it’s got to revolve around data that’s already routinely collected. No new tests. No new imaging studies. We’re looking at medical records because that’s where so much medical care is already delivered.”

And others are mining data even further upstream. Public health researchers are poring over Google searches for evidence of upticks in suicidal ideation. Facebook is scanning users’ wall posts and live videos for combinations of words that suggest a risk of self-harm. The VA is currently piloting an app that passively picks up vocal cues that can signal depression and mood swings. Verily is looking for similar biomarkers in smart watches and blood draws. The goal for all these efforts is to reach people where they are—on the internet and social media—instead of waiting for them to walk through a hospital door or hop in an fMRI tube.

Related Video

Technology

The Robot Will See You Now – AI and Health Care

Artificial intelligence is now detecting cancer and robots are doing nursing tasks. But are there risks to handing over elements of our health to machines, no matter how sophisticated?

You want the real window into someone's soul? Look at their Reddit subscriptions. It's all there: their passions, their hobbies, their ideological leanings, their love of terrible haircuts and sublime anonymized cringe. And if they're anything like me, those subscriptions also tell the tale of a life spent diving down rabbit holes.

Origami. Board games. Trail running. Pens. Cycling. Mechanical keyboards. Scrabble. (I know. God, I know. There are jokes to be made here. Trust that I've already made them all myself.) Whenever my interest attaches itself to a new thing—which has happened my entire life, cyclically and all-encompassingly—I tend to develop a singular, insatiable appetite for information about that thing. Hey, you know what the internet is really good at? Enabling singular, insatiable appetites.

Especially since 2005. That's the year Reddit and YouTube launched within months of each other, and obsession became centralized. You had options before that, blogs and message boards and Usenet forums, but they weren't exactly magnets of cross-pollination. They didn't fully open the floodgates to minute details and the masses yearning to pore over them. Then, on opposite sides of the country, two different small groups of twentysomething dudes created twin engines of infatuation. Between their massive tents and their ease of use, Reddit and YouTube tore away the guardrail that had always stood between serial hobbyists and oblivion.

For all the hand-wringing about both sites—YouTube's gameable recommendation algorithm that can radicalize dummies at the drop of a meme, Reddit's chelonian foot speed when dealing with bad actors and hate speech in the more noisome subreddits—both are incredible resources for the participatory realm. Watching more experienced people do what you're trying to do, sharing setups and techniques, even getting support and commiseration from those who are similarly, rapturously afloat in the same thing you can't stop reading and thinking about: It's not just a recipe for intellectual indulgence, but for improvement as well. (On YouTube, that value comes from the creator; on Reddit, it comes from the comments. Swap the two at your own peril.)

Rabbit holes are what make Beauty YouTube such a colossus, why the Ask Science subreddit has 16 million subscribers. But they also hold a secret: The deeper you go, the tighter it gets. That's because a rabbit hole is a filter bubble of sorts, albeit one that's labeled as such and explicitly opted into—you're there because you're interested in this Thing, as is everyone else, and under such celebratory scrutiny that Thing distends, its perceived stature far outweighing its real-life impact. Just because there are a million opinions about something doesn't make it important to anyone outside the bubble, let alone crucial.

And before long, orthodoxy rears its head. Want to make coffee? Oh, you're going to need to spend hours dialing in the grind on your $1,000 Mazzer Mini E before pouring 205-degree water over it from your gooseneck kettle. Don't forget to account for the bloom! Want to get a new keyboard that feels better and looks nicer than your laptop's? Great, but Topre switches or GTFO. Oh, and don't stop at one. Or two. Or 17.

Don't get me wrong. I'm a collector. I love the right tool for the right job, and I love research even more. (I'm really fucking weird about my pens.) But more than once I've become consumed by the idea that my experience with a Thing will be utterly transformed if I just treat myself to the right running vest. Or digital temperature regulator for an espresso machine. Or, yes, Scrabble-themed keycaps. That's not the joy of collecting; it's the expectation of fulfillment. I watch video reviews, or read people waxing rhapsodic, and it changes my Thing from a learning process, an intrinsic enjoyment, to a preamble. There's an "endgame"; there are "grails." Get the grail, and you're in the endgame.

But there's no endgame, and there's no grail. There's no bottom to the rabbit hole.

What there is is learning more about a thing you like to do, and maybe getting better at it. Running longer. Enjoying the feel of your pen on paper. Playing a game with friends. Everything else is just a commercial. So jump into all the rabbit holes you want—just don't expect to find Wonderland.


Related Video

Culture

Inside the YouTube-Fueled, Teenage Extravaganza That Is Beautycon

A look at the industry-shaking, in-real-life meetup of beauty-world influencers, their fans, and the brands that compete for their attention.

Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality—the participants remained deeply divided about how the theory should be interpreted.

Quanta Magazine


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Some physicists just shrug and say we have to live with the fact that quantum mechanics is weird. So particles can be in two places at once, or communicate instantaneously over vast distances? Get over it. After all, the theory works fine. If you want to calculate what experiments will reveal about subatomic particles, atoms, molecules and light, then quantum mechanics succeeds brilliantly.

But some researchers want to dig deeper. They want to know why quantum mechanics has the form it does, and they are engaged in an ambitious program to find out. It is called quantum reconstruction, and it amounts to trying to rebuild the theory from scratch based on a few simple principles.

If these efforts succeed, it’s possible that all the apparent oddness and confusion of quantum mechanics will melt away, and we will finally grasp what the theory has been trying to tell us. “For me, the ultimate goal is to prove that quantum theory is the only theory where our imperfect experiences allow us to build an ideal picture of the world,” said Giulio Chiribella, a theoretical physicist at the University of Hong Kong.

There’s no guarantee of success—no assurance that quantum mechanics really does have something plain and simple at its heart, rather than the abstruse collection of mathematical concepts used today. But even if quantum reconstruction efforts don’t pan out, they might point the way to an equally tantalizing goal: getting beyond quantum mechanics itself to a still deeper theory. “I think it might help us move towards a theory of quantum gravity,” said Lucien Hardy, a theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

The Flimsy Foundations of Quantum Mechanics

The basic premise of the quantum reconstruction game is summed up by the joke about the driver who, lost in rural Ireland, asks a passer-by how to get to Dublin. “I wouldn’t start from here,” comes the reply.

Where, in quantum mechanics, is “here”? The theory arose out of attempts to understand how atoms and molecules interact with light and other radiation, phenomena that classical physics couldn’t explain. Quantum theory was empirically motivated, and its rules were simply ones that seemed to fit what was observed. It uses mathematical formulas that, while tried and trusted, were essentially pulled out of a hat by the pioneers of the theory in the early 20th century.

Take Erwin Schrödinger’s equation for calculating the probabilistic properties of quantum particles. The particle is described by a “wave function” that encodes all we can know about it. It’s basically a wavelike mathematical expression, reflecting the well-known fact that quantum particles can sometimes seem to behave like waves. Want to know the probability that the particle will be observed in a particular place? Just calculate the square of the wave function (or, to be exact, a slightly more complicated mathematical term), and from that you can deduce how likely you are to detect the particle there. The probability of measuring some of its other observable properties can be found by, crudely speaking, applying a mathematical function called an operator to the wave function.
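
As a toy illustration of that recipe—a generic qubit example, not tied to any particular experiment—here is the squared-amplitude rule and an operator-based expectation value in a few lines of Python:

```python
# Toy Born-rule illustration: squared (absolute) amplitudes give measurement probabilities,
# and an operator gives the expectation value of an observable. Generic qubit example only.
import numpy as np

psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # a qubit wave function: superposition of |0> and |1>

probs = np.abs(psi) ** 2                   # probability of each measurement outcome
print(probs)                               # [0.5, 0.5]

Z = np.array([[1, 0], [0, -1]])            # operator for a measurable property (Pauli Z)
expectation = np.vdot(psi, Z @ psi).real   # <psi| Z |psi>
print(expectation)                         # 0.0: the +1 and -1 outcomes are equally likely
```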

But this so-called rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. Quantum mechanics seems largely built of arbitrary rules like this, some of them—such as the mathematical properties of operators that correspond to observable properties of the system—rather arcane. It’s a complex framework, but it’s also an ad hoc patchwork, lacking any obvious physical interpretation or justification.

Compare this with the ground rules, or axioms, of Einstein’s theory of special relativity, which was as revolutionary in its way as quantum mechanics. (Einstein launched them both, rather miraculously, in 1905.) Before Einstein, there was an untidy collection of equations to describe how light behaves from the point of view of a moving observer. Einstein dispelled the mathematical fog with two simple and intuitive principles: that the speed of light is constant, and that the laws of physics are the same for two observers moving at constant speed relative to one another. Grant these basic principles, and the rest of the theory follows. Not only are the axioms simple, but we can see at once what they mean in physical terms.

What are the analogous statements for quantum mechanics? The eminent physicist John Wheeler once asserted that if we really understood the central point of quantum theory, we would be able to state it in one simple sentence that anyone could understand. If such a statement exists, some quantum reconstructionists suspect that we’ll find it only by rebuilding quantum theory from scratch: by tearing up the work of Bohr, Heisenberg and Schrödinger and starting again.

Quantum Roulette

One of the first efforts at quantum reconstruction was made in 2001 by Hardy, then at the University of Oxford. He ignored everything that we typically associate with quantum mechanics, such as quantum jumps, wave-particle duality and uncertainty. Instead, Hardy focused on probability: specifically, the probabilities that relate the possible states of a system with the chance of observing each state in a measurement. Hardy found that these bare bones were enough to get all that familiar quantum stuff back again.

Hardy assumed that any system can be described by some list of properties and their possible values. For example, in the case of a tossed coin, the salient values might be whether it comes up heads or tails. Then he considered the possibilities for measuring those values definitively in a single observation. You might think any distinct state of any system can always be reliably distinguished (at least in principle) by a measurement or observation. And that’s true for objects in classical physics.

In quantum mechanics, however, a particle can exist not just in distinct states, like the heads and tails of a coin, but in a so-called superposition—roughly speaking, a combination of those states. In other words, a quantum bit, or qubit, can be not just in the binary state of 0 or 1, but in a superposition of the two.

But if you make a measurement of that qubit, you’ll only ever get a result of 1 or 0. That is the mystery of quantum mechanics, often referred to as the collapse of the wave function: Measurements elicit only one of the possible outcomes. To put it another way, a quantum object commonly has more options for measurements encoded in the wave function than can be seen in practice.

Hardy’s rules governing possible states and their relationship to measurement outcomes acknowledged this property of quantum bits. In essence the rules were (probabilistic) ones about how systems can carry information and how they can be combined and interconverted.

Hardy then showed that the simplest possible theory to describe such systems is quantum mechanics, with all its characteristic phenomena such as wavelike interference and entanglement, in which the properties of different objects become interdependent. “Hardy’s 2001 paper was the ‘Yes, we can!’ moment of the reconstruction program,” Chiribella said. “It told us that in some way or another we can get to a reconstruction of quantum theory.”

More specifically, it implied that the core trait of quantum theory is that it is inherently probabilistic. “Quantum theory can be seen as a generalized probability theory, an abstract thing that can be studied detached from its application to physics,” Chiribella said. This approach doesn’t address any underlying physics at all, but just considers how outputs are related to inputs: what we can measure given how a state is prepared (a so-called operational perspective). “What the physical system is is not specified and plays no role in the results,” Chiribella said. These generalized probability theories are “pure syntax,” he added — they relate states and measurements, just as linguistic syntax relates categories of words, without regard to what the words mean. In other words, Chiribella explained, generalized probability theories “are the syntax of physical theories, once we strip them of the semantics.”

The general idea for all approaches in quantum reconstruction, then, is to start by listing the probabilities that a user of the theory assigns to each of the possible outcomes of all the measurements the user can perform on a system. That list is the “state of the system.” The only other ingredients are the ways in which states can be transformed into one another, and the probability of the outputs given certain inputs. This operational approach to reconstruction “doesn’t assume space-time or causality or anything, only a distinction between these two types of data,” said Alexei Grinbaum, a philosopher of physics at the CEA Saclay in France.
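
A toy classical example (purely illustrative, and far simpler than the real quantum case) shows what that bookkeeping looks like: the “state” is nothing more than a list of outcome probabilities, and an allowed transformation is any map that turns one valid list into another.

    import numpy as np

    # The "state of the system": probabilities assigned to three possible outcomes
    state = np.array([0.7, 0.2, 0.1])

    # An allowed transformation: a stochastic matrix (each column sums to 1),
    # so it sends probability lists to probability lists
    transform = np.array([[0.9, 0.1, 0.0],
                          [0.1, 0.8, 0.3],
                          [0.0, 0.1, 0.7]])

    new_state = transform @ state
    print(new_state, new_state.sum())           # still sums to 1

The question the reconstructionists then face is which extra constraints on such lists and maps pick out quantum theory in particular.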

To distinguish quantum theory from a generalized probability theory, you need specific kinds of constraints on the probabilities and possible outcomes of measurement. But those constraints aren’t unique. So lots of possible theories of probability look quantum-like. How then do you pick out the right one?

“We can look for probabilistic theories that are similar to quantum theory but differ in specific aspects,” said Matthias Kleinmann, a theoretical physicist at the University of the Basque Country in Bilbao, Spain. If you can then find postulates that select quantum mechanics specifically, he explained, you can “drop or weaken some of them and work out mathematically what other theories appear as solutions.” Such exploration of what lies beyond quantum mechanics is not just academic doodling, for it’s possible—indeed, likely—that quantum mechanics is itself just an approximation of a deeper theory. That theory might emerge, as quantum theory did from classical physics, from violations in quantum theory that appear if we push it hard enough.

Bits and Pieces

Some researchers suspect that ultimately the axioms of a quantum reconstruction will be about information: what can and can’t be done with it. One such derivation of quantum theory based on axioms about information was proposed in 2010 by Chiribella, then working at the Perimeter Institute, and his collaborators Giacomo Mauro D’Ariano and Paolo Perinotti of the University of Pavia in Italy. “Loosely speaking,” explained Jacques Pienaar, a theoretical physicist at the University of Vienna, “their principles state that information should be localized in space and time, that systems should be able to encode information about each other, and that every process should in principle be reversible, so that information is conserved.” (In irreversible processes, by contrast, information is typically lost—just as it is when you erase a file on your hard drive.)

What’s more, said Pienaar, these axioms can all be explained using ordinary language. “They all pertain directly to the elements of human experience, namely, what real experimenters ought to be able to do with the systems in their laboratories,” he said. “And they all seem quite reasonable, so that it is easy to accept their truth.” Chiribella and his colleagues showed that a system governed by these rules shows all the familiar quantum behaviors, such as superposition and entanglement.

One challenge is to decide what should be designated an axiom and what physicists should try to derive from the axioms. Take the quantum no-cloning rule, which is another of the principles that naturally arises from Chiribella’s reconstruction. One of the deep findings of modern quantum theory, this principle states that it is impossible to make a duplicate of an arbitrary, unknown quantum state.

It sounds like a technicality (albeit a highly inconvenient one for scientists and mathematicians seeking to design quantum computers). But in an effort in 2002 to derive quantum mechanics from rules about what is permitted with quantum information, Jeffrey Bub of the University of Maryland and his colleagues Rob Clifton of the University of Pittsburgh and Hans Halvorson of Princeton University made no-cloning one of three fundamental axioms. One of the others was a straightforward consequence of special relativity: You can’t transmit information between two objects more quickly than the speed of light by making a measurement on one of the objects. The third axiom was harder to state, but it also crops up as a constraint on quantum information technology. In essence, it limits how securely a bit of information can be exchanged without being tampered with: The rule is a prohibition on what is called “unconditionally secure bit commitment.”

These axioms seem to relate to the practicalities of managing quantum information. But if we consider them instead to be fundamental, and if we additionally assume that the algebra of quantum theory has a property called non-commutation, meaning that the order in which you do calculations matters (in contrast to the multiplication of two numbers, which can be done in any order), Clifton, Bub and Halvorson have shown that these rules too give rise to superposition, entanglement, uncertainty, nonlocality and so on: the core phenomena of quantum theory.
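
That non-commutation assumption is easy to check with the standard spin matrices, used here purely as the simplest illustration:

    import numpy as np

    X = np.array([[0, 1], [1, 0]])    # Pauli X (spin flip)
    Z = np.array([[1, 0], [0, -1]])   # Pauli Z (spin along the z-axis)

    print(X @ Z)                          # [[ 0, -1], [ 1,  0]]
    print(Z @ X)                          # [[ 0,  1], [-1,  0]]
    print(np.array_equal(X @ Z, Z @ X))   # False: the order of operations matters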

Another information-focused reconstruction was suggested in 2009 by Borivoje Dakić and Časlav Brukner, physicists at the University of Vienna. They proposed three “reasonable axioms” having to do with information capacity: that the most elementary component of all systems can carry no more than one bit of information, that the state of a composite system made up of subsystems is completely determined by measurements on its subsystems, and that you can convert any “pure” state to another and back again (like flipping a coin between heads and tails).

Dakić and Brukner showed that these assumptions lead inevitably to classical and quantum-style probability, and to no other kinds. What’s more, if you modify axiom three to say that states get converted continuously—little by little, rather than in one big jump—you get only quantum theory, not classical. (Yes, it really is that way round, contrary to what the “quantum jump” idea would have you expect—you can interconvert states of quantum spins by rotating their orientation smoothly, but you can’t gradually convert a classical heads to a tails.) “If we don’t have continuity, then we don’t have quantum theory,” Grinbaum said.
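
The qubit half of that claim can be sketched in a few lines (again an illustration, not Dakić and Brukner’s own construction): a smooth rotation carries the pure 0 state to the pure 1 state through a continuous family of intermediate pure states, something a classical coin flip cannot do.

    import numpy as np

    def rotated_qubit(theta):
        """Pure qubit state obtained by rotating the 0 state by the angle theta."""
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])

    for theta in np.linspace(0, np.pi, 5):
        state = rotated_qubit(theta)
        print(f"theta = {theta:.2f}   state = {state.round(3)}   P(1) = {state[1]**2:.2f}")

    # theta = 0 gives the 0 state, theta = pi gives the 1 state, and every value
    # in between is itself a legitimate pure state, so the conversion is continuous.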

A further approach in the spirit of quantum reconstruction is called quantum Bayesianism, or QBism. Devised by Carlton Caves, Christopher Fuchs and Rüdiger Schack in the early 2000s, it takes the provocative position that the mathematical machinery of quantum mechanics has nothing to do with the way the world really is; rather, it is just the appropriate framework that lets us develop expectations and beliefs about the outcomes of our interventions. It takes its cue from the Bayesian approach to classical probability developed in the 18th century, in which probabilities stem from personal beliefs rather than observed frequencies. In QBism, quantum probabilities calculated by the Born rule don’t tell us what we’ll measure, but only what we should rationally expect to measure.

In this view, the world isn’t bound by rules—or at least, not by quantum rules. Indeed, there may be no fundamental laws governing the way particles interact; instead, laws emerge at the scale of our observations. This possibility was considered by John Wheeler, who dubbed the scenario Law Without Law. It would mean that “quantum theory is merely a tool to make comprehensible a lawless slicing-up of nature,” said Adán Cabello, a physicist at the University of Seville. Can we derive quantum theory from these premises alone?

“At first sight, it seems impossible,” Cabello admitted—the ingredients seem far too thin, not to mention arbitrary and alien to the usual assumptions of science. “But what if we manage to do it?” he asked. “Shouldn’t this shock anyone who thinks of quantum theory as an expression of properties of nature?”

Making Space for Gravity

In Hardy’s view, quantum reconstructions have been almost too successful, in one sense: Various sets of axioms all give rise to the basic structure of quantum mechanics. “We have these different sets of axioms, but when you look at them, you can see the connections between them,” he said. “They all seem reasonably good and are in a formal sense equivalent because they all give you quantum theory.” And that’s not quite what he’d hoped for. “When I started on this, what I wanted to see was two or so obvious, compelling axioms that would give you quantum theory and which no one would argue with.”

So how do we choose between the options available? “My suspicion now is that there is still a deeper level to go to in understanding quantum theory,” Hardy said. And he hopes that this deeper level will point beyond quantum theory, to the elusive goal of a quantum theory of gravity. “That’s the next step,” he said. Several researchers working on reconstructions now hope that its axiomatic approach will help us see how to pose quantum theory in a way that forges a connection with the modern theory of gravitation—Einstein’s general relativity.

Look at the Schrödinger equation and you will find no clues about how to take that step. But quantum reconstructions with an “informational” flavor speak about how information-carrying systems can affect one another, a framework of causation that hints at a link to the space-time picture of general relativity. Causation imposes chronological ordering: An effect can’t precede its cause. But Hardy suspects that the axioms we need to build quantum theory will be ones that embrace a lack of definite causal structure—no unique time-ordering of events—which he says is what we should expect when quantum theory is combined with general relativity. “I’d like to see axioms that are as causally neutral as possible, because they’d be better candidates as axioms that come from quantum gravity,” he said.

Hardy first suggested that quantum-gravitational systems might show indefinite causal structure in 2007. And in fact only quantum mechanics can display that. While working on quantum reconstructions, Chiribella was inspired to propose an experiment to create causal superpositions of quantum systems, in which there is no definite series of cause-and-effect events. This experiment has now been carried out by Philip Walther’s lab at the University of Vienna—and it might incidentally point to a way of making quantum computing more efficient.

“I find this a striking illustration of the usefulness of the reconstruction approach,” Chiribella said. “Capturing quantum theory with axioms is not just an intellectual exercise. We want the axioms to do something useful for us—to help us reason about quantum theory, invent new communication protocols and new algorithms for quantum computers, and to be a guide for the formulation of new physics.”

But can quantum reconstructions also help us understand the “meaning” of quantum mechanics? Hardy doubts that these efforts can resolve arguments about interpretation—whether we need many worlds or just one, for example. After all, precisely because the reconstructionist program is inherently “operational,” meaning that it focuses on the “user experience”—probabilities about what we measure—it may never speak about the “underlying reality” that creates those probabilities.

“When I went into this approach, I hoped it would help to resolve these interpretational problems,” Hardy admitted. “But I would say it hasn’t.” Cabello agrees. “One can argue that previous reconstructions failed to make quantum theory less puzzling or to explain where quantum theory comes from,” he said. “All of them seem to miss the mark for an ultimate understanding of the theory.” But he remains optimistic: “I still think that the right approach will dissolve the problems and we will understand the theory.”

Maybe, Hardy said, these challenges stem from the fact that the more fundamental description of reality is rooted in that still undiscovered theory of quantum gravity. “Perhaps when we finally get our hands on quantum gravity, the interpretation will suggest itself,” he said. “Or it might be worse!”

Right now, quantum reconstruction has few adherents—which pleases Hardy, as it means that it’s still a relatively tranquil field. But if it makes serious inroads into quantum gravity, that will surely change. In the 2011 poll, about a quarter of the respondents felt that quantum reconstructions will lead to a new, deeper theory. A one-in-four chance certainly seems worth a shot.

Grinbaum thinks that the task of building the whole of quantum theory from scratch with a handful of axioms may ultimately be unsuccessful. “I’m now very pessimistic about complete reconstructions,” he said. But, he suggested, why not try to do it piece by piece instead—to just reconstruct particular aspects, such as nonlocality or causality? “Why would one try to reconstruct the entire edifice of quantum theory if we know that it’s made of different bricks?” he asked. “Reconstruct the bricks first. Maybe remove some and look at what kind of new theory may emerge.”

“I think quantum theory as we know it will not stand,” Grinbaum said. “Which of its feet of clay will break first is what reconstructions are trying to explore.” He thinks that, as this daunting task proceeds, some of the most vexing and vague issues in standard quantum theory—such as the process of measurement and the role of the observer—will disappear, and we’ll see that the real challenges are elsewhere. “What is needed is new mathematics that will render these notions scientific,” he said. Then, perhaps, we’ll understand what we’ve been arguing about for so long.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Tech companies are eyeing the next frontier: the human face. Should you desire, you can now superimpose any variety of animal snouts onto a video of yourself in real time. If you choose to hemorrhage money on the new iPhone X, you can unlock your smartphone with a glance. At a KFC location in Hangzhou, China, you can even pay for a chicken sandwich by smiling at a camera. And at least one in four police departments in the US has access to facial recognition software to help them identify suspects.

But the tech isn’t perfect. Your iPhone X might not always unlock; a cop might arrest the wrong person. In order for software to always recognize your face as you, an entire sequence of algorithms has to work. First, the software has to be able to determine whether an image has a face in it at all. If you’re a cop trying to find a missing kid in a photo of a crowd, you might want the software to sort the faces by age. And ultimately, you need an algorithm that can compare each face with another photo in a database, perhaps with different lighting and at a different angle, and determine whether they’re the same person.

To improve these algorithms, researchers have found themselves using the tools of pollsters and social scientists: demographics. When they teach face recognition software about race, gender, and age, it can often perform certain tasks better. “This is not a surprising result,” says biometrics researcher Anil Jain of Michigan State University, “that if you model subpopulations separately you’ll get better results.” With better algorithms, maybe that cop won’t arrest the wrong person. Great news for everybody, right?

It’s not so simple. Demographic data may contribute to algorithms’ accuracy, but it also complicates their use.

Take a recent example. Researchers based at the University of Surrey in the UK and Jiangnan University in China were trying to improve an algorithm used in specific facial recognition applications. The algorithm, based on something called a 3-D morphable model, digitally converts a selfie into a 3-D head in less than a second. Model in hand, you can use it to rotate the angle of someone’s selfie, for example, to compare it to another photograph. The iPhone X and Snapchat use similar 3-D models.

The researchers gave their algorithm some basic instructions: Here’s a template of a head, and here’s the ability to stretch or compress it to get the 2-D image to drape over it as smoothly as possible. The template they used is essentially the average human face—average nose length, average pupil distance, average cheek diameter, calculated from 3-D scans they took of real people. When people made these models in the past, it was hard to collect a lot of scans because they’re time-consuming. So frequently, they’d just lump all their data together and calculate an average face, regardless of race, gender, or age.

The group used a database of 942 faces—3-D scans collected in the UK and in China—to make their template. But instead of calculating the average of all 942 faces at once, they categorized the face data by race. They made separate templates for each race—an average Asian face, white face, and black face, and based their algorithm on these three templates. And even though they had only 10 scans of black faces—they had 100 white faces and over 800 Asian faces—they found that their algorithm generated a 3-D model that matched a real person’s head better than the previous one-template model.
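
Schematically, the template-building step amounts to grouping the 3-D scans by a label and averaging within each group rather than across the whole dataset; the sketch below is an illustration of that idea, not the researchers’ actual code.

    import numpy as np
    from collections import defaultdict

    def build_templates(scans, labels):
        """Average-face template for each label, instead of one global average."""
        groups = defaultdict(list)
        for scan, label in zip(scans, labels):
            groups[label].append(scan)
        return {label: np.mean(group, axis=0) for label, group in groups.items()}

    # Toy data: six "scans", each a flattened array of vertex coordinates
    rng = np.random.default_rng(1)
    scans = rng.normal(size=(6, 9))
    labels = ["A", "A", "B", "B", "B", "C"]

    templates = build_templates(scans, labels)
    print({label: t.round(2) for label, t in templates.items()})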

“It’s not only for race,” says computer scientist Zhenhua Feng of the University of Surrey. “If you have a model for an infant, you can construct an infant’s 3-D face better. If you have a model for an old person, you can construct that type of 3-D face better.” So if you teach biometric software explicitly about social categories, it does a better job.

Feng’s particular 3-D models are a niche algorithm in facial recognition, says Jain—the trendy algorithms right now use 2-D photos because 3-D face data is hard to work with. But other, more widespread techniques also lump people into categories to improve their performance. A more common 3-D face model, known as a person-specific model, also often uses face templates. Depending on whether the person in the picture is a man, woman, infant, or elderly person, the algorithm will start with a different template. And for certain 2-D machine learning algorithms that verify whether two photographs show the same person, researchers have demonstrated that breaking the data down by appearance attributes—gender and race, but also eye color and expression—likewise improves accuracy.

So if you teach an algorithm about race, does that make it racist? Not necessarily, says sociologist Alondra Nelson of Columbia University, who studies the ethics of new technologies. Social scientists categorize data using demographic information all the time, in response to how society has already structured itself. For example, sociologists often analyze behaviors along gender or racial lines. “We live in a world that uses race for everything,” says Nelson. “I don’t understand the argument that we’re not supposed to here.” Existing databases—the FBI’s face depository, and the census—already stick people in predetermined boxes, so if you want an algorithm to work with these databases, you’ll have to use those categories.

However, Nelson points out, it’s important that computer scientists think through why they’ve chosen to use race over other categories. It’s possible that other variables with less potential for discrimination or bias would be just as effective. “Would it be OK to pick categories like blue eyes, brown eyes, thin nose, not thin nose, or whatever—and not have it to do with race at all?” says Nelson.

Researchers need to imagine the possible applications of their work, particularly the ones that governments or institutions of power might use, says Nelson. Last year, the FBI released surveillance footage they took to monitor Black Lives Matter protests in Baltimore—whose state police department has been using facial recognition software since 2011. “As this work gets more technically complicated, it falls on researchers not just to do the technical work, but the ethical work as well,” Nelson says. In other words, the software in Snapchat—how could the cops use it?
