In the early ’00s, few web endeavors seemed less bound for long-term glory than CollegeHumor.com. The site launched in 1999 as a video and sight-gag repository “dedicated to grinding your academic efforts to a halt.” Early on, that meant lots of bro-friendly distractions, like photos of students passed out on lawns, naughtily titled JPEGs, and video series like “Husky Dave the Fat Guy”. There was enough low-brow, high-bandwidth material on CollegeHumor–and enough users eager to submit their own homemade juvenilia–that, at one point, the site kept a running list of high schools that had banned it from their classrooms.

But in the decades that followed, CollegeHumor’s users aged out of school–and so did the site, which began focusing less on campus hijinks, and more on office-space goofiness and even politics. Along the way, it built up a healthy YouTube following, with the official CollegeHumor channel alone claiming more than 13 million subscribers. And in recent years, following a relocation from New York City to Los Angeles, the company found success with TV shows like truTV’s Adam Ruins Everything. CollegeHumor became one of the web’s few legacy companies, surviving while numerous other web-comedy companies ground to a halt.

Now the long-running company–which has been majority-owned by media heavyweight IAC since 2006–is matriculating into the unpredictable subscription-service realm. Today CollegeHumor announces DROPOUT, a streaming platform that will serve up a mix of original videos, online comics, and chat stories. Available initially as a mobile-web offering, with an introductory price of $3.99 a month, DROPOUT marks a sort of declaration of independence for the company: Thanks to increased restrictions on YouTube, not to mention the audience-friendly demands of network TV, CollegeHumor was experiencing “a little creative repression,” says Sam Reich, the company’s Head of Video. “Now, we get to do whatever we want.”

DROPOUT’s initial slate features more than ten shows, including See Plum Run, a school-election-themed revival of CollegeHumor’s popular Precious Plum series; the nerd-knowledge game show Um, Actually; and the dating series Lonely & Horny, featuring returning CollegeHumor stars Jake Hurwitz and Amir Blumenfeld. Also in the works is next year’s WTF 101, an animated program featuring a bunch of in-detention teens “learning the most fucked-up things about our world,” says Reich. “It's a show we couldn't do on TV, because it's way too R-rated.”

The hope is that the company’s more grown-up material–not to mention its decades-old fanbase–will help CollegeHumor succeed where several other streaming-service efforts have failed. Last year, the NBC-owned comedy site Seeso–which featured material from Saturday Night Live, as well as original shows like HarmonQuest–folded after less than two years. The Verizon-launched free service Go90, which featured a handful of comedy offerings, closed for good this summer–not long after the millennial-aimed upstart Fullscreen announced it was shutting down.

And at a time when Facebook is serving up an endless stream of personalized comedy videos, getting viewers to commit to a stand-alone service is riskier than ever. “If I can get funny videos on the internet for free, how does somebody like CollegeHumor break through?” asks James McQuivey, principal analyst at Forrester Research. “The blue-humor angle gives them a way to rise above the noise. And I think that could work–at first.”

The bigger challenge for a service like DROPOUT, McQuivey says, is keeping users around after the initial few months of enthusiasm. “You have to produce original content at a high volume,” he says. “If people are only coming back once or twice a month, they won’t pay for it. They have to come back once or twice a day.”

Reich and his colleagues know long-time fans might balk at the idea of handing over a few bucks each month for DROPOUT. Yet they believe it’s a fair trade-off for CollegeHumor’s newfound freedom. Reich says conversations about an on-demand outlet began in late 2016, after a TV series CollegeHumor had been developing with a big network–Reich is prevented from saying which one–went belly-up. “I was in this vulnerable place,” says Reich. “We’d just done what I thought was the best pilot to ever come through our company, and it was summarily rejected.” Eventually, Reich says, “we all stopped and looked at each other and went, ‘How do we take back more ownership?’”

CollegeHumor isn’t halting its TV efforts: In addition to Adam Ruins Everything, the company produces the series Hot Date for Pop. But DROPOUT allows the company to circumvent the restrictions that are an inevitable part of the development process, as executives have to pay heed to advertisers’ wishes. And it gives CollegeHumor an alternative to YouTube. The company still releases an average of 3-4 new videos to YouTube a week. But recently, Reich says, the platform “has become less and less friendly a place to be even a little bit outrageous.”

That’s caused problems for some of CollegeHumor’s videos from the past year, including “Our Weirdest Sex Misconceptions” and “CH Does The Purge”–both of which were flagged by the service as inappropriate for some viewers. Such restrictions make it harder for CollegeHumor to get those clips in front of viewers. According to Reich, YouTube’s algorithm “sometimes interprets a ‘comedy video about sex’ as being a ‘sex video.’” CollegeHumor can contest such rulings, but it doesn’t always win.

It’s not just YouTube’s recent crackdowns that have been a turn-off. Reich says the platform was never much of a money-maker for CollegeHumor. And for comedy creators, YouTube is hardly the eyeball-jackpot it used to be: Even four years ago, a CollegeHumor hit like “If Google Was a Guy” could go on to earn more than 40 million views–a number that seems impossible for any comedy sketch in 2019. “These days,” says Reich, “if a video gets over a million views, we consider that a hit.”

Ultimately, DROPOUT represents a way for CollegeHumor to move toward a less YouTube-tied future, as well as an attempt to recapture the lawlessness of the web’s not-so-distant digital past. “It’s not the frattiness we’re trying to get back,” says Reich, who’s been with the company since 2006. “But ten years ago, the internet used to be a haven for creative experimentation.” To get that back, “we needed to create our own platform, so we aren't dependent on anyone else.” No one, that is, except the people willing to pay for yet another subscription service.

The Future of Work: The Branch, by Eugene Lim

“A library of the future might also be, at its best, a sanctuary where we are encouraged to spend entire hours looking at just one thing.” —Michael Agresta, “What Will Become of the Library?” Slate (2014)

The library of the future is more or less the same. That is, the branch is an actual and metaphoric Faraday cage. You enter, a node and a target, streamed at and pushed and yanked, penetrated by and extruding information, sloppy with it. And then your implants are cut off. Your watch, your glasses, jacket, underwear, your lenses, tablet, chips, your nanos—all go dry.

You’ve come to the library as usual out of desperation, yearning, boredom. There is a heart of uncertainty in your life, and you might wish to ask the library any number of questions: Should you take this job or that one? Won’t you ever get out of debt? Will he ever love you? Does she love you enough? Enough to leave her wife? Why, after all this time, did he show up again? Why can’t I sleep? I think my kid thinks I’m stupid. Why do I sleep so much? Why oh why am I so fucked up?

The librarian sits in a wooden chair, dressed in starched, sharply pressed clothing, muted colors. Today it’s the skinny dapper dude. You slightly prefer him to the short hairy man, but above all you like the zaftig disheveled woman—though, in fact, they are all remarkably similar: efficient, a sad vulnerability offset by an almost smug confidence in their training and knowledge, impersonal yet generous. These librarians of the future.

Since this isn’t your first visit to the branch—you’re a regular—you can skip the usual orientations: the ritual data entry of blood type and genome sequence, the small pendulum and cutting of card deck, the opening up of palm and the tossing of yarrow. Those kinds of biometrics are for the newfangled anyway. Most of the time, here, it’s the more traditional talk therapy. What brings you in today? How did that make you feel? What were they like? Pretend she’s sitting in this chair.

“I got a weird call from my sister,” you say. “Her son is developing an eating disorder, and I wanted to tell her it’s because our mother was a monster and you’re becoming exactly the same … I never felt comfortable enough in my own skin … Always trying to please them, to please everyone, get them to like me … After we hung up, I wanted to eat the phone I was so mad …”

The librarian listens and prods and nods. Near the end, before you both rise, he repeats the usual admonitions, prayers, and liturgy. He says, “The infinite library, which is outside the library, is not the library. The world is everything that is the case. Relieve me of the bondage of self. The true library is human error, metonym, forgetting. To study the self is to forget the self. The library is not the map and is not the territory; the library is the map and the library is the territory. The empire never ended. It’s a small world after all …” You get tired of the mumbo jumbo but nonetheless respect the ritual.

You finish the advisory interview with a tour. He takes your arm and guides you around the stacks. He points out a new Japanese crime novel, a recently published translation of a Uruguayan rapper’s lyrics, and a popular cookbook of Basque cuisine. As always, he says—before disappearing to his next appointment—the most important thing is to take the time to browse.

You do and find a new series of yaoi manga and a trashy history of the Russian Revolution. In an overstuffed leather armchair, you spend a few hours reading the Uruguayan rapper’s compositions. They are startling, and they articulate for you dense intergenerational griefs you hadn’t before known you’d been carrying. Looking up, you realize the afternoon is nearly over. You put the books in a bag and feel their promising weight. The clouded, unbodied versions of these are out there, weightless, in the infinite library, but you came here to have these minds manifested in the physical; virtual reality machines made out of printed voice; handheld AI instantiated by paper, cardboard, and reader response.

Your steps out of the library are careless with serenity. Then you exit the building and so are instantly hit with the packages, whoops, and floods. You recall and repeat the librarian’s words: The infinite library is not the library. The infinite library is not the true library. The true library is human error, metonym, forgetting. The infinite library, which is outside the library, is not the library. The true library is incomplete.


Eugene Lim (@lim_eugene) is the author, most recently, of Dear Cyborgs, and works as a high school librarian.

This article is part of The Future of Work from the January issue.

  • Introduction: What'll We Do?
  • Real Girls by Laurie Penny
  • The Trustless by Ken Liu
  • Placebo by Charles Yu
  • The Farm by Charlie Jane Anders
  • The Third Petal by Nisi Shawl
  • Maximum Outflow by Adam Rogers
  • Compulsory by Martha Wells

When someone takes their own life, they leave behind an inheritance of unanswered questions. “Why did they do it?” “Why didn’t we see this coming?” “Why didn’t I help them sooner?” If suicide were easy to diagnose from the outside, it wouldn’t be the public health curse it is today. In 2014, suicide rates in the US surged to a 30-year high, and suicide is now the second leading cause of death among young adults. But what if you could get inside someone’s head, to see when dark thoughts might turn to action?

That’s what scientists are now attempting to do with the help of brain scans and artificial intelligence. In a study published today in Nature Human Behaviour, researchers at Carnegie Mellon and the University of Pittsburgh analyzed how suicidal individuals think and feel differently about life and death by looking at patterns of how their brains light up in an fMRI machine. Then they trained a machine learning algorithm to isolate those signals—a frontal lobe flare at the mention of the word “death,” for example. The computational classifier was able to pick out the suicidal ideators with more than 90 percent accuracy. Furthermore, it was able to distinguish people who had actually attempted self-harm from those who had only thought about it.

Thing is, fMRI studies like this suffer from some well-known shortcomings. The study had a small sample size—34 subjects—so while the algorithm might excel at spotting particular blobs in this set of brains, it’s not obvious it would work as well in a broader population. Another dilemma that bedevils fMRI studies: Just because two things occur at the same time doesn’t prove one causes the other. And then there’s the whole taint of tautology to worry about; scientists decide certain parts of the brain do certain things, then when they observe a hand-picked set of triggers lighting them up, boom, confirmation.

In today’s study, the researchers started with 17 young adults between the ages of 18 and 30 who had recently reported suicidal ideation to their therapists. Then they recruited 17 neurotypical control participants and put them each inside an fMRI scanner. While inside the tube, subjects saw a random series of 30 words. Ten were generally positive, 10 were generally negative, and 10 were specifically associated with death and suicide. Then researchers asked the subjects to think about each word for three seconds as it showed up on a screen in front of them. “What does ‘trouble’ mean for you?” “What about ‘carefree,’ what’s the key concept there?” For each word, the researchers recorded the subjects' cerebral blood flow to find out which parts of their brains seemed to be at work.

Then they took those brain scans and fed them to a machine learning classifier. For each word, they told the algorithm which scans belonged to the suicidal ideators and which belonged to the control group, leaving one person at random out of the training set. Once it got good at telling the two apart, they gave it the left-out person. They did this for all 30 words, each time excluding one test subject. At the end, the classifier could reliably look at a scan and say whether or not that person had thought about killing themselves 91 percent of the time. To see how well it could more generally parse people, they then turned it on 21 additional suicidal ideators, who had been excluded from the main analyses because their brain scans had been too messy. Using the six most discriminating concepts—death, cruelty, trouble, carefree, good, and praise—the classifier spotted the ones who’d thought about suicide 87 percent of the time.
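
To make that procedure concrete, here is a minimal sketch of a leave-one-out loop in Python, with random numbers standing in for the fMRI scans; the feature layout and the choice of classifier are illustrative assumptions, not details taken from the paper.

    # Leave-one-out classification sketch (toy data, hypothetical features).
    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X = rng.normal(size=(34, 500))     # 34 subjects x 500 voxel-derived features
    y = np.array([1] * 17 + [0] * 17)  # 1 = suicidal ideator, 0 = control

    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = GaussianNB().fit(X[train_idx], y[train_idx])  # train on 33 subjects
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])  # score the held-out one
    print(f"Leave-one-out accuracy: {correct / len(X):.0%}")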

“The fact that it still performed well with noisier data tells us that the model is more broadly generalizable,” says Marcel Just, a psychologist at Carnegie Mellon and lead author on the paper. But he says the approach needs more testing to determine if it could successfully monitor or predict future suicide attempts. Comparing groups of individuals with and without suicide risk isn’t the same thing as holding up a brain scan and assigning its owner a likelihood of going through with it.

But that’s where this is all headed. Right now, the only way doctors can know if a patient is thinking of harming themselves is if they report it to a therapist, and many don’t. In a study of people who committed suicide either in the hospital or immediately following discharge, nearly 80 percent denied thinking about it to the last mental healthcare professional they saw. So there is a real need for better predictive tools. And a real opportunity for AI to fill that void. But probably not with fMRI data.

It’s just not practical. The scans can cost a few thousand dollars, and insurers only cover them if there is a valid clinical reason to do so. That is, if a doctor thinks the only way to diagnose what’s wrong with you is to stick you in a giant magnet. While plenty of neuroscience papers make use of fMRI, in the clinic the imaging procedure is reserved for very rare cases. Most hospitals aren’t equipped with the machinery for that very reason. Which is why Just is planning to replicate the study—but with patients wearing electronic sensors on their heads while they're in the tube. Electroencephalograms, or EEGs, are one hundredth the price of fMRI equipment. The idea is to tie predictive brain scan signals to corresponding EEG readouts, so that doctors can use the much cheaper test to identify high-risk patients.

Other scientists are already mining more accessible kinds of data to find telltale signatures of impending suicide. Researchers at Florida State and Vanderbilt recently trained a machine learning algorithm on 3,250 electronic medical records for people who had attempted suicide sometime in the last 20 years. It identifies people not by their brain activity patterns, but by things like age, sex, prescriptions, and medical history. And it correctly predicts future suicide attempts about 85 percent of the time.
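
For a sense of what training on records rather than scans looks like, here is a minimal sketch under invented assumptions: the features, labels, and choice of model below are placeholders, not the Florida State and Vanderbilt team's actual pipeline.

    # Records-based risk model sketch: tabular features in, risk estimate out.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    # Hypothetical columns: age, sex (0/1), prescription count, prior ER visits
    X = rng.integers(0, 80, size=(3250, 4)).astype(float)
    y = rng.integers(0, 2, size=3250)  # 1 = later attempt (synthetic labels)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    new_patient = np.array([[23.0, 1.0, 6.0, 2.0]])  # hypothetical record
    print("Estimated risk:", model.predict_proba(new_patient)[0, 1])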

“As a practicing doctor, none of those things on their own might pop out to me, but the computer can spot which combinations of features are predictive of suicide risk,” says Colin Walsh, an internist and clinical informatician at Vanderbilt who’s working to turn the algorithm he helped develop into a monitoring tool doctors and other healthcare professionals in Nashville can use to keep tabs on patients. “To actually get used it’s got to revolve around data that’s already routinely collected. No new tests. No new imaging studies. We’re looking at medical records because that’s where so much medical care is already delivered.”

And others are mining data even further upstream. Public health researchers are poring over Google searches for evidence of upticks in suicidal ideation. Facebook is scanning users’ wall posts and live videos for combinations of words that suggest a risk of self-harm. The VA is currently piloting an app that passively picks up vocal cues that can signal depression and mood swings. Verily is looking for similar biomarkers in smart watches and blood draws. The goal for all these efforts is to reach people where they are—on the internet and social media—instead of waiting for them to walk through a hospital door or hop in an fMRI tube.

You want the real windows into someone's soul? Look at their Reddit subscriptions. It's all there: their passions, their hobbies, their ideological leanings, their love of terrible haircuts and sublime anonymized cringe. And if they're anything like me, those subscriptions also tell the tale of a life spent diving down rabbit holes.

Origami. Board games. Trail running. Pens. Cycling. Mechanical keyboards. Scrabble. (I know. God, I know. There are jokes to be made here. Trust that I've already made them all myself.) Whenever my interest attaches itself to a new thing—which has happened my entire life, cyclically and all-encompassingly—I tend to develop a singular, insatiable appetite for information about that thing. Hey, you know what the internet is really good at? Enabling singular, insatiable appetites.

Especially since 2005. That's the year Reddit and YouTube launched within months of each other, and obsession became centralized. You had options before that, blogs and message boards and Usenet forums, but they weren't exactly magnets of cross-pollination. They didn't fully open the floodgates to minute details and the masses yearning to pore over them. Then, on opposite sides of the country, two different small groups of twentysomething dudes created twin engines of infatuation. Between their massive tents and their ease of use, Reddit and YouTube tore away the guardrail that had always stood between serial hobbyists and oblivion.

For all the hand-wringing about both sites—YouTube's gameable recommendation algorithm that can radicalize dummies at the drop of a meme, Reddit's chelonian foot speed when dealing with bad actors and hate speech in the more noisome subreddits—both are incredible resources for the participatory realm. Watching more experienced people do what you're trying to do, sharing setups and techniques, even getting support and commiseration from those who are similarly, rapturously afloat in the same thing you can't stop reading and thinking about: It's not just a recipe for intellectual indulgence, but for improvement as well. (On YouTube, that value comes from the creator; on Reddit, it comes from the comments. Swap the two at your own peril.)

Rabbit holes are what make Beauty YouTube such a colossus, why the Ask Science subreddit has 16 million subscribers. But they also hold a secret: The deeper you go, the tighter it gets. That's because a rabbit hole is a filter bubble of sorts, albeit one that's labeled as such and explicitly opted into—you're there because you're interested in this Thing, as is everyone else, and under such celebratory scrutiny that Thing distends, its perceived stature far outweighing its real-life impact. Just because there are a million opinions about something doesn't make it important to anyone outside the bubble, let alone crucial.

And before long, orthodoxy rears its head. Want to make coffee? Oh, you're going to need to spend hours dialing in the grind on your $1,000 Mazzer Mini E before pouring 205-degree water over it from your gooseneck kettle. Don't forget to account for the bloom! Want to get a new keyboard that feels better and looks nicer than your laptop's? Great, but Topre switches or GTFO. Oh, and don't stop at one. Or two. Or 17.

Don't get me wrong. I'm a collector. I love the right tool for the right job, and I love research even more. (I'm really fucking weird about my pens.) But more than once I've become consumed by the idea that my experience with a Thing will be utterly transformed if I just treat myself to the right running vest. Or digital temperature regulator for an espresso machine. Or, yes, Scrabble-themed keycaps. That's not the joy of collecting; it's the expectation of fulfillment. I watch video reviews, or read people waxing rhapsodic, and it changes my Thing from a learning process, an intrinsic enjoyment, to a preamble. There's an "endgame"; there are "grails." Get the grail, and you're in the endgame.

But there's no endgame, and there's no grail. There's no bottom to the rabbit hole.

What there is is learning more about a thing you like to do, and maybe getting better at it. Running longer. Enjoying the feel of your pen on paper. Playing a game with friends. Everything else is just a commercial. So jump into all the rabbit holes you want—just don't expect to find Wonderland.


Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality—the participants remained deeply divided about how the theory should be interpreted.

Some physicists just shrug and say we have to live with the fact that quantum mechanics is weird. So particles can be in two places at once, or communicate instantaneously over vast distances? Get over it. After all, the theory works fine. If you want to calculate what experiments will reveal about subatomic particles, atoms, molecules and light, then quantum mechanics succeeds brilliantly.

But some researchers want to dig deeper. They want to know why quantum mechanics has the form it does, and they are engaged in an ambitious program to find out. It is called quantum reconstruction, and it amounts to trying to rebuild the theory from scratch based on a few simple principles.

If these efforts succeed, it’s possible that all the apparent oddness and confusion of quantum mechanics will melt away, and we will finally grasp what the theory has been trying to tell us. “For me, the ultimate goal is to prove that quantum theory is the only theory where our imperfect experiences allow us to build an ideal picture of the world,” said Giulio Chiribella, a theoretical physicist at the University of Hong Kong.

There’s no guarantee of success—no assurance that quantum mechanics really does have something plain and simple at its heart, rather than the abstruse collection of mathematical concepts used today. But even if quantum reconstruction efforts don’t pan out, they might point the way to an equally tantalizing goal: getting beyond quantum mechanics itself to a still deeper theory. “I think it might help us move towards a theory of quantum gravity,” said Lucien Hardy, a theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

The Flimsy Foundations of Quantum Mechanics

The basic premise of the quantum reconstruction game is summed up by the joke about the driver who, lost in rural Ireland, asks a passer-by how to get to Dublin. “I wouldn’t start from here,” comes the reply.

Where, in quantum mechanics, is “here”? The theory arose out of attempts to understand how atoms and molecules interact with light and other radiation, phenomena that classical physics couldn’t explain. Quantum theory was empirically motivated, and its rules were simply ones that seemed to fit what was observed. It uses mathematical formulas that, while tried and trusted, were essentially pulled out of a hat by the pioneers of the theory in the early 20th century.

Take Erwin Schrödinger’s equation for calculating the probabilistic properties of quantum particles. The particle is described by a “wave function” that encodes all we can know about it. It’s basically a wavelike mathematical expression, reflecting the well-known fact that quantum particles can sometimes seem to behave like waves. Want to know the probability that the particle will be observed in a particular place? Just calculate the square of the wave function (or, to be exact, the square of its absolute value), and from that you can deduce how likely you are to detect the particle there. The probability of measuring some of its other observable properties can be found by, crudely speaking, applying a mathematical function called an operator to the wave function.
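
In symbols, and as a standard textbook aside rather than anything particular to this article, the recipe reads:

    % Born rule: the probability density for finding the particle at position x
    p(x) = |\psi(x)|^2
    % Expected value of an observable A, via its operator \hat{A}:
    \langle A \rangle = \langle \psi | \hat{A} | \psi \rangle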

But this rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. Quantum mechanics seems largely built of arbitrary rules like this, some of them—such as the mathematical properties of operators that correspond to observable properties of the system—rather arcane. It’s a complex framework, but it’s also an ad hoc patchwork, lacking any obvious physical interpretation or justification.

Compare this with the ground rules, or axioms, of Einstein’s theory of special relativity, which was as revolutionary in its way as quantum mechanics. (Einstein launched them both, rather miraculously, in 1905.) Before Einstein, there was an untidy collection of equations to describe how light behaves from the point of view of a moving observer. Einstein dispelled the mathematical fog with two simple and intuitive principles: that the speed of light is constant, and that the laws of physics are the same for two observers moving at constant speed relative to one another. Grant these basic principles, and the rest of the theory follows. Not only are the axioms simple, but we can see at once what they mean in physical terms.

What are the analogous statements for quantum mechanics? The eminent physicist John Wheeler once asserted that if we really understood the central point of quantum theory, we would be able to state it in one simple sentence that anyone could understand. If such a statement exists, some quantum reconstructionists suspect that we’ll find it only by rebuilding quantum theory from scratch: by tearing up the work of Bohr, Heisenberg and Schrödinger and starting again.

Quantum Roulette

One of the first efforts at quantum reconstruction was made in 2001 by Hardy, then at the University of Oxford. He ignored everything that we typically associate with quantum mechanics, such as quantum jumps, wave-particle duality and uncertainty. Instead, Hardy focused on probability: specifically, the probabilities that relate the possible states of a system with the chance of observing each state in a measurement. Hardy found that these bare bones were enough to get all that familiar quantum stuff back again.

Hardy assumed that any system can be described by some list of properties and their possible values. For example, in the case of a tossed coin, the salient values might be whether it comes up heads or tails. Then he considered the possibilities for measuring those values definitively in a single observation. You might think any distinct state of any system can always be reliably distinguished (at least in principle) by a measurement or observation. And that’s true for objects in classical physics.

In quantum mechanics, however, a particle can exist not just in distinct states, like the heads and tails of a coin, but in a so-called superposition—roughly speaking, a combination of those states. In other words, a quantum bit, or qubit, can be not just in the binary state of 0 or 1, but in a superposition of the two.

But if you make a measurement of that qubit, you’ll only ever get a result of 1 or 0. That is the mystery of quantum mechanics, often referred to as the collapse of the wave function: Measurements elicit only one of the possible outcomes. To put it another way, a quantum object commonly has more options for measurements encoded in the wave function than can be seen in practice.
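
In standard textbook notation (the notation is conventional, not something unique to Hardy's reconstruction), the situation reads:

    % A qubit in a superposition of the basis states |0> and |1>:
    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
    % A measurement returns 0 with probability |\alpha|^2 and 1 with
    % probability |\beta|^2, after which the state collapses to |0> or |1>.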

Hardy’s rules governing possible states and their relationship to measurement outcomes acknowledged this property of quantum bits. In essence the rules were (probabilistic) ones about how systems can carry information and how they can be combined and interconverted.

Hardy then showed that the simplest possible theory to describe such systems is quantum mechanics, with all its characteristic phenomena such as wavelike interference and entanglement, in which the properties of different objects become interdependent. “Hardy’s 2001 paper was the ‘Yes, we can!’ moment of the reconstruction program,” Chiribella said. “It told us that in some way or another we can get to a reconstruction of quantum theory.”

More specifically, it implied that the core trait of quantum theory is that it is inherently probabilistic. “Quantum theory can be seen as a generalized probability theory, an abstract thing that can be studied detached from its application to physics,” Chiribella said. This approach doesn’t address any underlying physics at all, but just considers how outputs are related to inputs: what we can measure given how a state is prepared (a so-called operational perspective). “What the physical system is is not specified and plays no role in the results,” Chiribella said. These generalized probability theories are “pure syntax,” he added — they relate states and measurements, just as linguistic syntax relates categories of words, without regard to what the words mean. In other words, Chiribella explained, generalized probability theories “are the syntax of physical theories, once we strip them of the semantics.”

The general idea for all approaches in quantum reconstruction, then, is to start by listing the probabilities that a user of the theory assigns to each of the possible outcomes of all the measurements the user can perform on a system. That list is the “state of the system.” The only other ingredients are the ways in which states can be transformed into one another, and the probability of the outputs given certain inputs. This operational approach to reconstruction “doesn’t assume space-time or causality or anything, only a distinction between these two types of data,” said Alexei Grinbaum, a philosopher of physics at the CEA Saclay in France.

To distinguish quantum theory from a generalized probability theory, you need specific kinds of constraints on the probabilities and possible outcomes of measurement. But those constraints aren’t unique. So lots of possible theories of probability look quantum-like. How then do you pick out the right one?

“We can look for probabilistic theories that are similar to quantum theory but differ in specific aspects,” said Matthias Kleinmann, a theoretical physicist at the University of the Basque Country in Bilbao, Spain. If you can then find postulates that select quantum mechanics specifically, he explained, you can “drop or weaken some of them and work out mathematically what other theories appear as solutions.” Such exploration of what lies beyond quantum mechanics is not just academic doodling, for it’s possible—indeed, likely—that quantum mechanics is itself just an approximation of a deeper theory. That theory might emerge, as quantum theory did from classical physics, from violations in quantum theory that appear if we push it hard enough.

Bits and Pieces

Some researchers suspect that ultimately the axioms of a quantum reconstruction will be about information: what can and can’t be done with it. One such derivation of quantum theory based on axioms about information was proposed in 2010 by Chiribella, then working at the Perimeter Institute, and his collaborators Giacomo Mauro D’Ariano and Paolo Perinotti of the University of Pavia in Italy. “Loosely speaking,” explained Jacques Pienaar, a theoretical physicist at the University of Vienna, “their principles state that information should be localized in space and time, that systems should be able to encode information about each other, and that every process should in principle be reversible, so that information is conserved.” (In irreversible processes, by contrast, information is typically lost—just as it is when you erase a file on your hard drive.)

What’s more, said Pienaar, these axioms can all be explained using ordinary language. “They all pertain directly to the elements of human experience, namely, what real experimenters ought to be able to do with the systems in their laboratories,” he said. “And they all seem quite reasonable, so that it is easy to accept their truth.” Chiribella and his colleagues showed that a system governed by these rules shows all the familiar quantum behaviors, such as superposition and entanglement.

One challenge is to decide what should be designated an axiom and what physicists should try to derive from the axioms. Take the quantum no-cloning rule, which is another of the principles that naturally arises from Chiribella’s reconstruction. One of the deep findings of modern quantum theory, this principle states that it is impossible to make a duplicate of an arbitrary, unknown quantum state.
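
The standard argument for no-cloning is short enough to state here; it is the usual textbook derivation, not anything specific to the reconstruction programs discussed above. Suppose one fixed operation U could copy every state. Because quantum operations preserve inner products, that assumption collapses:

    % Assume a single unitary U clones arbitrary states:
    U\big(|\psi\rangle \otimes |0\rangle\big) = |\psi\rangle \otimes |\psi\rangle
    % Taking inner products of two cloned states then forces
    \langle\phi|\psi\rangle = \langle\phi|\psi\rangle^{2}
    \;\Rightarrow\; \langle\phi|\psi\rangle \in \{0,\, 1\}
    % So copying works only for identical or perfectly distinguishable
    % (orthogonal) states -- never for an arbitrary, unknown one.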

It sounds like a technicality (albeit a highly inconvenient one for scientists and mathematicians seeking to design quantum computers). But in an effort in 2002 to derive quantum mechanics from rules about what is permitted with quantum information, Jeffrey Bub of the University of Maryland and his colleagues Rob Clifton of the University of Pittsburgh and Hans Halvorson of Princeton University made no-cloning one of three fundamental axioms. One of the others was a straightforward consequence of special relativity: You can’t transmit information between two objects more quickly than the speed of light by making a measurement on one of the objects. The third axiom was harder to state, but it also crops up as a constraint on quantum information technology. In essence, it limits how securely a bit of information can be exchanged without being tampered with: The rule is a prohibition on what is called “unconditionally secure bit commitment.”

These axioms seem to relate to the practicalities of managing quantum information. But if we consider them instead to be fundamental, and if we additionally assume that the algebra of quantum theory has a property called non-commutation, meaning that the order in which you do calculations matters (in contrast to the multiplication of two numbers, which can be done in any order), Clifton, Bub and Halvorson have shown that these rules too give rise to superposition, entanglement, uncertainty, nonlocality and so on: the core phenomena of quantum theory.

Another information-focused reconstruction was suggested in 2009 by Borivoje Dakić and Časlav Brukner, physicists at the University of Vienna. They proposed three “reasonable axioms” having to do with information capacity: that the most elementary component of all systems can carry no more than one bit of information, that the state of a composite system made up of subsystems is completely determined by measurements on its subsystems, and that you can convert any “pure” state to another and back again (like flipping a coin between heads and tails).

Dakić and Brukner showed that these assumptions lead inevitably to classical and quantum-style probability, and to no other kinds. What’s more, if you modify axiom three to say that states get converted continuously—little by little, rather than in one big jump—you get only quantum theory, not classical. (Yes, it really is that way round, contrary to what the “quantum jump” idea would have you expect—you can interconvert states of quantum spins by rotating their orientation smoothly, but you can’t gradually convert a classical heads to a tails.) “If we don’t have continuity, then we don’t have quantum theory,” Grinbaum said.
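
That continuity axiom has a concrete textbook illustration (again, standard quantum mechanics rather than anything unique to Dakić and Brukner's paper): a smooth rotation carries a qubit from one pure state to another through valid states at every intermediate angle.

    % Rotating a spin sweeps |0> (theta = 0) continuously into |1> (theta = pi):
    R(\theta)\,|0\rangle = \cos(\theta/2)\,|0\rangle + \sin(\theta/2)\,|1\rangle
    % Every intermediate theta gives a legitimate pure state; a classical
    % coin has no comparable state "between" heads and tails.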

A further approach in the spirit of quantum reconstruction is called quantum Bayesianism, or QBism. Devised by Carlton Caves, Christopher Fuchs and Rüdiger Schack in the early 2000s, it takes the provocative position that the mathematical machinery of quantum mechanics has nothing to do with the way the world really is; rather, it is just the appropriate framework that lets us develop expectations and beliefs about the outcomes of our interventions. It takes its cue from the Bayesian approach to classical probability developed in the 18th century, in which probabilities stem from personal beliefs rather than observed frequencies. In QBism, quantum probabilities calculated by the Born rule don’t tell us what we’ll measure, but only what we should rationally expect to measure.

In this view, the world isn’t bound by rules—or at least, not by quantum rules. Indeed, there may be no fundamental laws governing the way particles interact; instead, laws emerge at the scale of our observations. This possibility was considered by John Wheeler, who dubbed the scenario Law Without Law. It would mean that “quantum theory is merely a tool to make comprehensible a lawless slicing-up of nature,” said Adán Cabello, a physicist at the University of Seville. Can we derive quantum theory from these premises alone?

“At first sight, it seems impossible,” Cabello admitted—the ingredients seem far too thin, not to mention arbitrary and alien to the usual assumptions of science. “But what if we manage to do it?” he asked. “Shouldn’t this shock anyone who thinks of quantum theory as an expression of properties of nature?”

Making Space for Gravity

In Hardy’s view, quantum reconstructions have been almost too successful, in one sense: Various sets of axioms all give rise to the basic structure of quantum mechanics. “We have these different sets of axioms, but when you look at them, you can see the connections between them,” he said. “They all seem reasonably good and are in a formal sense equivalent because they all give you quantum theory.” And that’s not quite what he’d hoped for. “When I started on this, what I wanted to see was two or so obvious, compelling axioms that would give you quantum theory and which no one would argue with.”

So how do we choose between the options available? “My suspicion now is that there is still a deeper level to go to in understanding quantum theory,” Hardy said. And he hopes that this deeper level will point beyond quantum theory, to the elusive goal of a quantum theory of gravity. “That’s the next step,” he said. Several researchers working on reconstructions now hope that the axiomatic approach will help us see how to pose quantum theory in a way that forges a connection with the modern theory of gravitation—Einstein’s general relativity.

Look at the Schrödinger equation and you will find no clues about how to take that step. But quantum reconstructions with an “informational” flavor speak about how information-carrying systems can affect one another, a framework of causation that hints at a link to the space-time picture of general relativity. Causation imposes chronological ordering: An effect can’t precede its cause. But Hardy suspects that the axioms we need to build quantum theory will be ones that embrace a lack of definite causal structure—no unique time-ordering of events—which he says is what we should expect when quantum theory is combined with general relativity. “I’d like to see axioms that are as causally neutral as possible, because they’d be better candidates as axioms that come from quantum gravity,” he said.

Hardy first suggested that quantum-gravitational systems might show indefinite causal structure in 2007. And in fact only quantum mechanics can display that. While working on quantum reconstructions, Chiribella was inspired to propose an experiment to create causal superpositions of quantum systems, in which there is no definite series of cause-and-effect events. This experiment has now been carried out by Philip Walther’s lab at the University of Vienna—and it might incidentally point to a way of making quantum computing more efficient.

“I find this a striking illustration of the usefulness of the reconstruction approach,” Chiribella said. “Capturing quantum theory with axioms is not just an intellectual exercise. We want the axioms to do something useful for us—to help us reason about quantum theory, invent new communication protocols and new algorithms for quantum computers, and to be a guide for the formulation of new physics.”

But can quantum reconstructions also help us understand the “meaning” of quantum mechanics? Hardy doubts that these efforts can resolve arguments about interpretation—whether we need many worlds or just one, for example. After all, precisely because the reconstructionist program is inherently “operational,” meaning that it focuses on the “user experience”—probabilities about what we measure—it may never speak about the “underlying reality” that creates those probabilities.

“When I went into this approach, I hoped it would help to resolve these interpretational problems,” Hardy admitted. “But I would say it hasn’t.” Cabello agrees. “One can argue that previous reconstructions failed to make quantum theory less puzzling or to explain where quantum theory comes from,” he said. “All of them seem to miss the mark for an ultimate understanding of the theory.” But he remains optimistic: “I still think that the right approach will dissolve the problems and we will understand the theory.”

Maybe, Hardy said, these challenges stem from the fact that the more fundamental description of reality is rooted in that still undiscovered theory of quantum gravity. “Perhaps when we finally get our hands on quantum gravity, the interpretation will suggest itself,” he said. “Or it might be worse!”

Right now, quantum reconstruction has few adherents—which pleases Hardy, as it means that it’s still a relatively tranquil field. But if it makes serious inroads into quantum gravity, that will surely change. In the 2011 poll, about a quarter of the respondents felt that quantum reconstructions will lead to a new, deeper theory. A one-in-four chance certainly seems worth a shot.

Grinbaum thinks that the task of building the whole of quantum theory from scratch with a handful of axioms may ultimately be unsuccessful. “I’m now very pessimistic about complete reconstructions,” he said. But, he suggested, why not try to do it piece by piece instead—to just reconstruct particular aspects, such as nonlocality or causality? “Why would one try to reconstruct the entire edifice of quantum theory if we know that it’s made of different bricks?” he asked. “Reconstruct the bricks first. Maybe remove some and look at what kind of new theory may emerge.”

“I think quantum theory as we know it will not stand,” Grinbaum said. “Which of its feet of clay will break first is what reconstructions are trying to explore.” He thinks that, as this daunting task proceeds, some of the most vexing and vague issues in standard quantum theory—such as the process of measurement and the role of the observer—will disappear, and we’ll see that the real challenges are elsewhere. “What is needed is new mathematics that will render these notions scientific,” he said. Then, perhaps, we’ll understand what we’ve been arguing about for so long.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Tech companies are eyeing the next frontier: the human face. Should you desire, you can now superimpose any variety of animal snouts onto a video of yourself in real time. If you choose to hemorrhage money on the new iPhone X, you can unlock your smartphone with a glance. At a KFC location in Hangzhou, China, you can even pay for a chicken sandwich by smiling at a camera. And at least one in four police departments in the US has access to facial recognition software to help identify suspects.

But the tech isn’t perfect. Your iPhone X might not always unlock; a cop might arrest the wrong person. In order for software to always recognize your face as you, an entire sequence of algorithms has to work. First, the software has to be able to determine whether an image has a face in it at all. If you’re a cop trying to find a missing kid in a photo of a crowd, you might want the software to sort the faces by age. And ultimately, you need an algorithm that can compare each face with another photo in a database, perhaps with different lighting and at a different angle, and determine whether they’re the same person.

To improve these algorithms, researchers have found themselves using the tools of pollsters and social scientists: demographics. When they teach face recognition software about race, gender, and age, it can often perform certain tasks better. “This is not a surprising result,” says biometrics researcher Anil Jain of Michigan State University, “that if you model subpopulations separately you’ll get better results.” With better algorithms, maybe that cop won’t arrest the wrong person. Great news for everybody, right?

It’s not so simple. Demographic data may contribute to algorithms’ accuracy, but it also complicates their use.

Take a recent example. Researchers based at the University of Surrey in the UK and Jiangnan University in China were trying to improve an algorithm used in specific facial recognition applications. The algorithm, based on something called a 3-D morphable model, digitally converts a selfie into a 3-D head in less than a second. Model in hand, you can use it to rotate the angle of someone’s selfie, for example, to compare it to another photograph. The iPhone X and Snapchat use similar 3-D models.

The researchers gave their algorithm some basic instructions: Here’s a template of a head, and here’s the ability to stretch or compress it to get the 2-D image to drape over it as smoothly as possible. The template they used is essentially the average human face—average nose length, average pupil distance, average cheek diameter, calculated from 3-D scans they took of real people. When people made these models in the past, it was hard to collect a lot of scans because they’re time-consuming. So frequently, they’d just lump all their data together and calculate an average face, regardless of race, gender, or age.

The group used a database of 942 faces—3-D scans collected in the UK and in China—to make their template. But instead of calculating the average of all 942 faces at once, they categorized the face data by race. They made separate templates for each race—an average Asian face, white face, and black face, and based their algorithm on these three templates. And even though they had only 10 scans of black faces—they had 100 white faces and over 800 Asian faces—they found that their algorithm generated a 3-D model that matched a real person’s head better than the previous one-template model.
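
In code, the change amounts to replacing one global average with a per-group average. Here is a minimal sketch, assuming each scan is an array of vertex-aligned 3-D points; the data layout and random values are invented for illustration, with group sizes mirroring the text.

    # Per-group mean face templates instead of a single global average.
    import numpy as np

    rng = np.random.default_rng(2)
    scans_by_group = {  # toy stand-ins for vertex-aligned 3-D scans
        "asian": [rng.normal(size=(500, 3)) for _ in range(800)],
        "white": [rng.normal(size=(500, 3)) for _ in range(100)],
        "black": [rng.normal(size=(500, 3)) for _ in range(10)],
    }

    def mean_template(scans):
        """Average vertex-aligned scans into one template head."""
        return np.stack(scans).mean(axis=0)  # shape: (500 vertices, 3 coords)

    templates = {g: mean_template(s) for g, s in scans_by_group.items()}
    # A fitting pipeline would then select the appropriate template and
    # deform it to match the 2-D photo, instead of starting from one
    # all-purpose average head.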

“It’s not only for race,” says computer scientist Zhenhua Feng of the University of Surrey. “If you have a model for an infant, you can construct an infant’s 3-D face better. If you have a model for an old person, you can construct that type of 3-D face better.” So if you teach biometric software explicitly about social categories, it does a better job.

Feng’s particular 3-D models are a niche algorithm in facial recognition, says Jain—the trendy algorithms right now use 2-D photos because 3-D face data is hard to work with. But other more widespread techniques also lump people into categories to improve their performance. A more common 3-D face model, known as a person-specific model, also often uses face templates. Depending on whether the person in the picture is a man, woman, infant, or an elderly person, the algorithm will start with a different template. For specific 2-D machine learning algorithms that verify that two photographs contain the same person, researchers have demonstrated that if you break down different appearance attributes—gender, race, but also eye color, expression—it will also perform more accurately.

So if you teach an algorithm about race, does that make it racist? Not necessarily, says sociologist Alondra Nelson of Columbia University, who studies the ethics of new technologies. Social scientists categorize data using demographic information all the time, in response to how society has already structured itself. For example, sociologists often analyze behaviors along gender or racial lines. “We live in a world that uses race for everything,” says Nelson. “I don’t understand the argument that we’re not supposed to here.” Existing databases—the FBI’s face depository, and the census—already stick people in predetermined boxes, so if you want an algorithm to work with these databases, you’ll have to use those categories.

However, Nelson points out, it’s important that computer scientists think through why they’ve chosen to use race over other categories. It’s possible that other variables with less potential for discrimination or bias would be just as effective. “Would it be OK to pick categories like blue eyes, brown eyes, thin nose, not thin nose, or whatever—and not have it to do with race at all?” says Nelson.

Researchers need to imagine the possible applications of their work, particularly the ones that governments or institutions of power might use, says Nelson. Last year, the FBI released surveillance footage it took to monitor Black Lives Matter protests in Baltimore, where the state police department has been using facial recognition software since 2011. “As this work gets more technically complicated, it falls on researchers not just to do the technical work, but the ethical work as well,” Nelson says. In other words, the software in Snapchat—how could the cops use it?

A New Captain Marvel Trailer Is Coming Tonight

It's time once again to turn on The Monitor, WIRED's roundup of the latest in the world of culture, from box-office news to announcements about hot new trailers. In today's installment: Captain Marvel readies for lift-off; Stephen King signs up for HBO; and Marvel breaks new ground.

She Is the Captain Now

Marvel will debut the next, and perhaps final, full trailer for Captain Marvel tonight during ESPN's Monday Night Football game between the San Junipero Jawas and the Trouble City Tribbles (those are actual sports teams, right?). The movie, which stars Brie Larson as the titular do-gooder, arrives next year. Watch for the trailer on WIRED later today. And speaking of all things Marvel…

'Master' Plan

…the studio has announced a big-screen stand-alone film following Shang-Chi, the Asian-American superhero (and occasional Avenger) who was introduced in the 1970s and hailed as "The Master of Kung Fu." The Shang-Chi script will be written by Dave Callaham, who wrote next year's Wonder Woman 1984 and is basically working on every movie you'll be watching in the next two years. No release date or plot details for Shang-Chi are known yet, but Marvel is reportedly fast-tracking the film, so expect more updates soon.

Because the Internet

It wasn't quite a slaughter race at the box office last weekend, with Disney's Ralph Breaks the Internet easily topping the chart once again, earning more than $25 million. The hit animated film was followed by such weeks-old hits as The Grinch, Creed II, Fantastic Beasts: The Crimes of Grindelwald, and Bohemian Rhapsody, the last of which has now made half a billion dollars worldwide. But the box-office perch wasn't Ralph's only weekend victory: The film was also nominated for Best Animated Feature at the year's Annie Awards, alongside such films as Isle of Dogs and Spider-Man: Into the Spider-Verse.

King's Things

HBO is turning Stephen King's recent horror-procedural hit The Outsider into a series. The author's 7,863rd bestseller—about a Midwest murder investigation that bleeds into the realm of the supernatural—is being overseen for the small screen by Jason Bateman, who will direct two episodes and produce. Emmy winner (and WIRED favorite) Ben Mendelsohn will star, adding to his roster of dark-hearted tales, which includes everything from Animal Kingdom to Netflix's Bloodline to Rogue One: A Star Wars Story, in which he stared down a deadly Darth Vader pun.

This story originally appeared on CityLab and is part of the Climate Desk collaboration.

If you want an unusual but punchy telling of the world’s explosion of climate-warping gases, look no further than this visualization of CO2 levels over the past centuries soaring like skyscrapers into space.

“A Brief History of CO2 Emissions” portrays the cumulative amount of this common greenhouse gas that humans have produced since the mid-1700s. It also projects to the end of the 21st century to show what might happen if the world disregards the Paris Agreement, an ambitious effort to limit warming that 200 countries signed onto in 2015. (President Donald Trump still wants to renege on it.) At this point, the CO2-plagued atmosphere could see jumps in average temperature as high as 6 to 9 degrees Fahrenheit, the animation’s narrator warns, displaying a model of Earth looking less like a planet than a porcupine.

“We wanted to show where and when CO2 was emitted in the last 250 years—and might be emitted in the coming 80 years if no climate action is taken,” emails Boris Mueller, a creator of the viz along with designer Julian Braun and others at Germany’s University of Applied Sciences Potsdam and the Potsdam Institute for Climate Impact Research. “By visualizing the global distribution and the local amount of cumulated CO2, we were able to create a strong image that demonstrates very clearly the dominant CO2-emitting regions and time spans.”

The visualization begins with a small, white lump growing on London around 1760—the start of the Industrial Revolution. More white dots quickly appear throughout Europe, rising prominently in Paris and Brussels in the mid-1800s, then throughout Asia and the US, where in the early 1900s emissions skyrocket in the New York region, Chicago, and Southern California.

By the time the present day rolls around, the world looks like home to the biggest construction project in existence, with spires that’d put the Burj Khalifa to shame ascending in the US, China, and Europe—currently the worst emitters in terms of volume of CO2.

For this project, the team pulled historical data from the US Department of Energy-affiliated Carbon Dioxide Information Analysis Center. The “CO2 emission estimates are deduced from information on the amount and location of fossil-fuel combustion and cement production over time,” says Elmar Kriegler, the viz’s scientific lead. “Therefore, the visualization also tells the history of the Industrial Revolution which started in England, spread across Europe and the United States, and finally across the world.”
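For the code-inclined, the key transformation behind those growing spires is cumulation: each region's column encodes the running total of everything it has ever emitted, not its annual output. Here is a minimal sketch of that aggregation in Python, using invented numbers and column names in place of the actual CDIAC dataset:

```python
import pandas as pd

# Toy stand-in for the CDIAC emissions data; the regions, years, and
# values here are invented for illustration.
emissions = pd.DataFrame({
    "year":   [1900, 1950, 2000, 1900, 1950, 2000],
    "region": ["Europe", "Europe", "Europe", "US", "US", "US"],
    "co2_mt": [500, 2000, 4000, 300, 3000, 6000],  # annual megatons of CO2
})

# The "skyscraper height" at any moment is the running total of a
# region's emissions to date, not that year's output.
emissions = emissions.sort_values(["region", "year"])
emissions["cumulative_mt"] = emissions.groupby("region")["co2_mt"].cumsum()

print(emissions)
```

Run over 250 years of real data, that running total is why early industrializers like Europe tower in the animation even in years when their annual emissions are comparatively modest.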

Astute observers will notice a couple of troubling things, such as the huge amount of emissions pouring out of urban areas like London, New York, and Tokyo. Cities and the power plants that keep them humming remain the world’s largest source of anthropogenic greenhouse gases. Also notable: the relative absence of emissions in some parts of the planet. That isn’t necessarily a good thing. “Some regions, in particular Africa, still do not show a significant cumulative CO2-emissions signal,” says Kriegler, “highlighting that they are still in the beginning of industrialization and may increase their emissions rapidly in the future, if they follow the path of Europe, the U.S., Japan, and recently China and Southeast Asia.”

How likely is it that the worst-case scenario portrayed in this viz is nearing our doorstep? The viz’s creators argue that some current damage is here to stay. But they have some cause for optimism, too. “Reducing CO2 emissions to zero in the second half of the century can be achieved with decisive, global-scale emissions-reductions policies and efforts,” Kriegler says. “The Paris Agreement can be an important [catalyst] for this development if embraced fully by the world’s leading emitters and powers. But as we say in the movie, the time to act is now.”


Welp, 2018 is going out with a bang. In the last week, America got a reminder that Russia hacked the 2016 US election by hijacking social media; acting attorney general Matthew Whitaker rejected legal advice to recuse himself from overseeing Special Counsel Robert Mueller's probe; drones attacked British airports; and California dealt with potential UFOs. Actually, considering how the rest of the year has gone, that's not much of a bang at all—just a standard week in 2018. But what else are people talking about on this wreck that is the internet? Let's find out, shall we?

Trump's Big Move

What Happened: President Trump announced the US would be pulling troops out of Syria, leading to some instability, to say the least.

What Really Happened: Trump's surprise holiday gift to the Middle East arrived early Wednesday, as reports surfaced suggesting that the United States was about to withdraw troops from Syria. Those reports were soon confirmed via Twitter, because of course.

No, wait; I mean these tweets—but please remember that Trump announced that the US had defeated ISIS all the same.

It was, to put things mildly, not a popular decision, even within Trump's own, traditionally kowtowing-no-matter-what party.

The decision came as a surprise to many, with a lot of people unsure how, exactly, the decision had been reached, especially considering the president’s own national security team was apparently against it. Others believed that he had a pretty good idea.

So, if his own defense secretary had no say, who exactly was consulted?

OK, sure; for any other administration, that would seem like a wild conspiracy theory. However, when you look at who benefits from this decision, you do start to wonder just a little.

Funny thing about those actually arguing in favor of this move: the president doesn’t seem to be aware that it's happening, judging by his public statements.

Wait. They have to fight ISIS? Wasn't ISIS defeated, according to a tweet made by exactly the same person just a day before? Man, international politics moves so quickly these days.

The Takeaway: An unexpected casualty of the decision might point to larger problems with Trump's attitude toward geopolitics: Defense Secretary Jim Mattis resigned Thursday over the matter, penning a letter that makes his feelings clear.

The Incomplete Sentencing of Michael Flynn

What Happened: Just in case anyone forgot: There's still an investigation into potentially illegal activity surrounding the presidential campaign of the man currently in the White House, and it's continuing to bear strange, surreal fruit.

What Really Happened: As if anyone could forget the ongoing legal trouble surrounding the Trump administration, this week saw a sentencing hearing for one of the president's former advisors—in this case, former National Security Advisor Michael Flynn. If it seems like it was just last week that one of Trump's former advisors had a sentencing hearing, that's because it was. But like the seasoned pro he is, the president was eager to get out in front of the story.

Still, it's just a sentencing. How exciting or surprising could that be, unless you’re Michael Cohen making statements about being free once you get three years in jail? Turns out, the answer was "very surprising."

Those would be the circumstances alleged by Flynn’s lawyers: that he was, essentially, hoodwinked into confessing, because no one at the FBI told him that lying to the FBI was a crime. Things only continued from there.

Well, yeah; that sounds pretty wild, especially the whole not hiding disgust thing. But that was just the start.

So, that was a surreal event. Who saw an abrupt postponement coming? Definitely not Flynn’s attorneys, who the media judged to have badly miscalculated. But it ended well, at least with regard to the irony of the whole thing.

Roll on, March, I guess?

The Takeaway: When it comes to the surreal developments in a legal case like this, there’s a sensible response and a non-boring response. Guess which one this is.

Paul Ryan's Retirement Party

What Happened: Paul Ryan is just days away from retiring as Speaker of the House, so clearly it's time for a farewell tour that perhaps doesn't get the response he'd like.

What Really Happened: We're not saying that some politicians have an exaggerated sense of their own importance, but outgoing Speaker of the House Paul Ryan had a "farewell address" at the Library of Congress last week, and the invitation looked like this:

Actually, never mind the invitations, the actual speech didn't look too much better—

—but let's not think about the optics. Let's focus on the substance, shall we? Ryan complained about the "broken politics" of Washington, while congratulating himself on a tax bill that hurts the poor. So, you know, pretty much what you might expect, all things considered.

Let’s just say that not everyone was impressed with Ryan's speech—or, for that matter, his legacy as a political figure. Headlines like "Good Riddance, Paul Ryan," "So Long, Paul Ryan, You Won’t Be Missed," "Paul Ryan Is the Biggest Fake I've Ever Seen in Politics," and "Paul Ryan Was a Villain and No One Will Miss Him"—all of which are actually real, and from a 24-hour period, amazingly—might give that away.

In fact, we'd go so far as to say that some were particularly unimpressed.

So, uh, happy retirement…? (We'll always have your creepy workout photoshoot, Paul. Nothing will ever take that away from you. Sadly.)

The Takeaway: Meanwhile, the woman who is likely to replace Ryan had perhaps the greatest response to the entire thing.

Shaft the Messenger

What Happened: You weren't being paranoid after all; someone else really was able to get access to all your messages on Facebook. Doesn't that make you feel better?

What Really Happened: In case you thought that things couldn't get much worse for Facebook considering its recent public relations woes, guess what: It could get much worse. Take it away, New York Times.

Yes, you read that right, as unbelievable as it may sound.

Not enough yikes for you just yet? Oh, just keep going, because it gets worse.

Many people were wondering what the solution was. A recurring theme kept popping up.

Meanwhile, the media took a different, and far less surprising, tack, with everyone talking about deleting Facebook a lot.

How serious was this as a threat? Well, Facebook released two different responses to try and clear up rumors … by pretty much confirming the reporting. That's almost a start, kind of?

The Takeaway: On the plus side, at least this was the only PR disaster for Facebook this week related to other people having access to private information on the platform.

The Shutdown Looms

What Happened: It's been teased throughout 2018, but as the year draws to a close, perhaps the US has finally reached the point where the government is going to shut down. Just in time!

What Really Happened: The US government has been teetering on the edge of a shutdown for some time now. There have been short-term fixes and last-minute deals for months in an attempt to ensure that there isn't what Rep. Nancy Pelosi memorably called a Trump Shutdown. Last week, for example, with just days to go before funding ran out, there was a move toward one more before-the-buzzer save—not that anyone seemed to think it would work.

Funny story; it never even got a chance to fail in the Senate.

Yes, it’s Paul Ryan again, a day after bemoaning "broken politics," helping politics be that little bit more broken.

So … maybe the shutdown is back on?

Well, perhaps not…

President Trump, at least, spent Friday morning doing what he could. Which is to say, he tweeted about the subject a lot.

People were not incredibly impressed.

At the time of this writing, the Senate hasn't voted on it. But here's a funny story: The president is refusing to sign a bill that doesn't fund the border wall that was, originally, going to be paid for by Mexico (hey, remember those days?), but … what if there was an alternative? What if someone else wanted to pay for the wall so that the government could stay open?

Well, that seems entirely legit.

It's surely a sign of 2018 that it's actually impossible to reject this plan entirely out of hand. Maybe we should just run a GoFundMe to keep the government open? Oh, no, wait; that's called paying taxes.

The Takeaway: Assuming that we are almost certainly going to have a shutdown for the holidays—everyone's favorite gift—let's just take a moment to appreciate what's happening, shall we?

See you all in 2019!

Huntington’s disease is brutal in its simplicity. The disorder, which slowly bulldozes your ability to control your body, starts with just a single mutation, in the gene for huntingtin protein. That tweak tacks an unwelcome glob of glutamines—extra amino acids—onto the protein, turning it into a destroyer that attacks neurons.
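(The "tweak," specifically, is an expansion of a repeating CAG triplet in the huntingtin gene; each extra CAG codon adds one more glutamine to the protein, and roughly 36 or more repeats is the commonly cited disease-associated range.) Counting that repeat run is simple enough to sketch in a few lines of Python. The snippet below is a toy illustration with invented sequences, not real HTT DNA and certainly not a diagnostic tool:

```python
import re

def longest_cag_run(dna: str) -> int:
    """Length, in repeats, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

# Invented sequences for illustration only:
typical  = "ATG" + "CAG" * 20 + "CCG"   # ~20 repeats: typical
expanded = "ATG" + "CAG" * 45 + "CCG"   # ~45 repeats: disease-associated

print(longest_cag_run(typical))   # 20
print(longest_cag_run(expanded))  # 45
```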

Huntington’s simplicity is exciting, because theoretically, it means you could treat it with a single drug targeted at that errant protein. But in the 24 years since scientists discovered the gene for huntingtin, the search for suitable drugs has come up empty. This century’s riches of genetic and chemical data seem like they should have sped up research, but so far, the drug pipeline is more faucet than fire hydrant.

Part of the problem is simply that drug design is hard. But many researchers point to the systems of paywalls and patents that lock up data, slowing the flow of information. So a nonprofit called the Structural Genomics Consortium is countering with a strategy of extreme openness. They’re partnering with nine pharmaceutical companies and labs at six universities, including Oxford, the University of Toronto, and UNC Chapel Hill. They’re pledging to share everything with each other—drug wish lists, results in open access journals, and experimental samples—hoping to speed up the long, expensive drug design process for tough diseases like Huntington’s.

Rachel Harding, a postdoc at the University of Toronto arm of the collaboration, joined up to study the Huntington’s protein after she finished her PhD at Oxford. In a recent round of experiments, her lab grew insect cells in stacks of lab flasks fed with pink media. After slipping the cells a DNA vector that directed them to produce huntingtin, Rachel purified and stabilized the protein—and once it hangs out in a deep freezer for a while, she’ll map it with an electron microscope at Oxford.

Harding’s approach deviates from the norm in one major way: She doesn’t wait to publish a paper before sharing her results. After each of her experiments, “we’ll just put that into the public domain so that more people can use our stuff for free,” she says: protocols, the genetic sequences that worked for making proteins, experimental data. She’d even like to share protein samples with interested researchers, as she’s offered on Twitter. All this work is to create a map of huntingtin, “how all the atoms are connected to each other in three-dimensional space,” Harding says, including potential binding sites for drugs.

The next step is to ping that protein structure with thousands of molecules–chemical probes–to see if any bind in a helpful way. That’s what Kilian Huber, a medicinal chemistry researcher at Oxford University’s arm of the Structural Genomics Consortium, spends his days working on. Given a certain protein, he develops a way to measure its activity in cells, and then tests it against chemicals from pharmaceutical companies’ compound libraries, full of thousands of potential drug molecules.

If they score a hit, Huber and his consortium collaborators have pledged not to patent any of these chemicals. To the contrary, they want to share any chemical probe that works so it can quickly get more replication and testing. Many times, at other researchers’ requests, he has “put these compounds in an envelope, and sent them over,” he says. Recipient researchers generally cover shipping costs, and the organization as a whole has shipped off more than 10,000 samples since it started in 2004.

Under the umbrella of the SGC, about 200 scientists like Kilian and Rachel have agreed to never file any patents, and to publish only open access papers. CEO Aled Edwards beams when he talks about the group’s “metastatic openness.” Asking researchers to agree to share their work hasn’t been a problem. “There’s a willingness to be open,” he says, “you just have to show the way.”

Is Sharing Caring?

There are a few challenges to such a high degree of openness. The academic labs have some say in which projects they tackle first—but it’s their funders that ultimately decide which tricky proteins everyone will work on. Each government, pharmaceutical company, or nonprofit that gifts $8 million to the organization can nominate proteins to a master to-do list, which researchers at these companies and affiliated universities tackle together.

That list could be a risk for the pharma companies at the table: While it doesn’t specify which company nominated which protein, the entire group can see that somebody is interested in a Huntington’s strategy, for example. But they’re willing to gamble on this selective reveal of their priorities. For several million dollars—a fraction of most of these companies’ R&D budgets—companies including Pfizer, Novartis, and Bayer buy into the scientific expertise of this group and stand to get results a bit faster. And since no one is patenting any of the genes, protein structures, or experimental chemicals they produce, the companies can still file their own patents for whatever drugs they create as a result of this research.

That might seem like a bum deal for the scientists doing all the work of discovery. But mostly, scientists at the SGC seem thrilled that collaborating can accelerate their research.


“Rather than trying to do everything yourself, I can just share whatever I'm generating, and give it to the people that I think are experts in that area,” says Huber. “Then they will share the information back with us, and that, to me, is the key, from a personal point of view, on top of hopefully being able to support the development of new medicines.” Because all the work is published open access, technically anyone in the world could benefit.

Edwards has pushed the SGC to slowly open up new steps of the drug discovery process. They started out working on genes, which is why they’re named a ‘genomics consortium’, then worked their way up to sharing protein structures like the ones Harding works on. Creating and sharing tool compounds like Huber’s is their latest advance. “We’re trying to create a parallel universe where we can invent medicines in the open, where we can share our data,” Edwards says.

He hopes their approach will expand into a wider movement, so that other life science researchers get on board with data sharing, and open-source science improves repeatability and speeds the pace of discovery. The Montreal Neurological Institute stopped filing patents on any of its discoveries last year. And there are other groups, like the Open Source Malaria Project, that have made a point of keeping all of their science in the open.

Sharing data won’t necessarily rein in the inflating prices of certain drugs. But it could certainly speed up understanding of new compounds, and shore up their chances of getting through clinical trials. The drug-making process is so complicated that if data sharing shaved just a bit of time off each step, it could save people years of waiting. The Huntington’s patients are waiting.
