Month: March 2019

When someone takes their own life, they leave behind an inheritance of unanswered questions. “Why did they do it?” “Why didn’t we see this coming?” “Why didn’t I help them sooner?” If suicide were easy to diagnose from the outside, it wouldn’t be the public health curse it is today. In 2014, suicide rates surged to a 30-year high in the US; suicide is now the second leading cause of death among young adults. But what if you could get inside someone’s head, to see when dark thoughts might turn to action?

That’s what scientists are now attempting to do with the help of brain scans and artificial intelligence. In a study published today in Nature Human Behaviour, researchers at Carnegie Mellon and the University of Pittsburgh analyzed how suicidal individuals think and feel differently about life and death by looking at patterns of how their brains light up in an fMRI machine. Then they trained a machine learning algorithm to isolate those signals—a frontal lobe flare at the mention of the word “death,” for example. The computational classifier was able to pick out the suicidal ideators with more than 90 percent accuracy. Furthermore, it was able to distinguish people who had actually attempted self-harm from those who had only thought about it.

Thing is, fMRI studies like this suffer from some well-known shortcomings. The study had a small sample size—34 subjects—so while the algorithm might excel at spotting particular blobs in this set of brains, it’s not obvious it would work as well in a broader population. Another dilemma that bedevils fMRI studies: Just because two things occur at the same time doesn’t prove one causes the other. And then there’s the whole taint of tautology to worry about; scientists decide certain parts of the brain do certain things, then when they observe a hand-picked set of triggers lighting them up, boom, confirmation.

In today’s study, the researchers started with 17 young adults between the ages of 18 and 30 who had recently reported suicidal ideation to their therapists. Then they recruited 17 neurotypical control participants and put them each inside an fMRI scanner. While inside the tube, subjects saw a random series of 30 words. Ten were generally positive, 10 were generally negative, and 10 were specifically associated with death and suicide. Then researchers asked the subjects to think about each word for three seconds as it showed up on a screen in front of them. “What does ‘trouble’ mean for you?” “What about ‘carefree,’ what’s the key concept there?” For each word, the researchers recorded the subjects' cerebral blood flow to find out which parts of their brains seemed to be at work.

Then they took those brain scans and fed them to a machine learning classifier. For each word, they told the algorithm which scans belonged to the suicidal ideators and which belonged to the control group, each time holding one subject out of the training set. Once it got good at telling the two apart, they tested it on the held-out person, repeating the process until every subject had served as the test case. At the end, the classifier could look at a scan and say whether or not that person had thought about killing themselves 91 percent of the time. To see how well it could more generally parse people, they then turned it on 21 additional suicidal ideators, who had been excluded from the main analyses because their brain scans had been too noisy. Using the six most discriminating concepts—death, cruelty, trouble, carefree, good, and praise—the classifier spotted the ones who’d thought about suicide 87 percent of the time.
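The train-on-everyone-but-one scheme described here is standard leave-one-out cross-validation. A minimal sketch in Python, using synthetic feature vectors and a simple nearest-centroid rule as stand-ins for the study's actual fMRI features and classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI features: 34 "subjects", 20 voxels each.
# Group 0 and group 1 differ by a small mean shift, much as the real
# ideator and control scans differed in activation patterns.
X = np.vstack([rng.normal(0.0, 1.0, (17, 20)),
               rng.normal(0.8, 1.0, (17, 20))])
y = np.array([0] * 17 + [1] * 17)

def nearest_centroid_predict(X_train, y_train, x):
    """Assign x to the class whose training-set mean is closest."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return 0 if np.linalg.norm(x - c0) <= np.linalg.norm(x - c1) else 1

# Leave-one-out cross-validation: hold each subject out in turn,
# train on the remaining 33, and test on the held-out scan.
hits = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    hits += nearest_centroid_predict(X[mask], y[mask], X[i]) == y[i]

accuracy = hits / len(y)
print(f"leave-one-out accuracy: {accuracy:.2f}")
```

With only 34 subjects, this hold-one-out scheme is the standard way to squeeze an honest accuracy estimate out of a small sample, though (as the article notes) it says nothing about how the model fares on a broader population.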

“The fact that it still performed well with noisier data tells us that the model is more broadly generalizable,” says Marcel Just, a psychologist at Carnegie Mellon and lead author on the paper. But he says the approach needs more testing to determine if it could successfully monitor or predict future suicide attempts. Comparing groups of individuals with and without suicide risk isn’t the same thing as holding up a brain scan and assigning its owner a likelihood of going through with it.

But that’s where this is all headed. Right now, the only way doctors can know if a patient is thinking of harming themselves is if they report it to a therapist, and many don’t. In a study of people who committed suicide either in the hospital or immediately following discharge, nearly 80 percent denied thinking about it to the last mental healthcare professional they saw. So there is a real need for better predictive tools. And a real opportunity for AI to fill that void. But probably not with fMRI data.

It’s just not practical. The scans can cost a few thousand dollars, and insurers only cover them if there is a valid clinical reason to do so. That is, if a doctor thinks the only way to diagnose what’s wrong with you is to stick you in a giant magnet. While plenty of neuroscience papers make use of fMRI, in the clinic the imaging procedure is reserved for very rare cases, and most hospitals aren’t equipped with the machinery for that very reason. Which is why Just is planning to replicate the study—but with patients wearing electronic sensors on their heads while they’re in the tube. Electroencephalogram, or EEG, equipment costs about one hundredth as much as an fMRI machine. The idea is to tie predictive brain scan signals to corresponding EEG readouts, so that doctors can use the much cheaper test to identify high-risk patients.

Other scientists are already mining more accessible kinds of data to find telltale signatures of impending suicide. Researchers at Florida State and Vanderbilt recently trained a machine learning algorithm on 3,250 electronic medical records for people who had attempted suicide sometime in the last 20 years. It identifies people not by their brain activity patterns, but by things like age, sex, prescriptions, and medical history. And it correctly predicts future suicide attempts about 85 percent of the time.
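The features named here (age, sex, prescriptions, medical history) are the kind of flat, tabular inputs such a model consumes. A hypothetical sketch of turning one medical record into a feature row; every field name below is invented for illustration, not taken from the Vanderbilt system:

```python
# Hypothetical encoding of an electronic medical record into the kind
# of flat feature vector a tabular classifier consumes.
def encode_record(record):
    return [
        record["age"],
        1 if record["sex"] == "F" else 0,          # one-hot sex
        len(record["prescriptions"]),              # medication count
        1 if "ssri" in record["prescriptions"] else 0,
        record["prior_er_visits"],                 # utilization history
    ]

row = encode_record({
    "age": 29, "sex": "F",
    "prescriptions": ["ssri", "benzodiazepine"],
    "prior_er_visits": 3,
})
print(row)  # [29, 1, 2, 1, 3]
```

No single column here screams "risk"; the model's job, as Walsh describes below, is to learn which combinations of such columns are predictive.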

“As a practicing doctor, none of those things on their own might pop out to me, but the computer can spot which combinations of features are predictive of suicide risk,” says Colin Walsh, an internist and clinical informatician at Vanderbilt who’s working to turn the algorithm he helped develop into a monitoring tool doctors and other healthcare professionals in Nashville can use to keep tabs on patients. “To actually get used it’s got to revolve around data that’s already routinely collected. No new tests. No new imaging studies. We’re looking at medical records because that’s where so much medical care is already delivered.”

And others are mining data even further upstream. Public health researchers are poring over Google searches for evidence of upticks in suicidal ideation. Facebook is scanning users’ wall posts and live videos for combinations of words that suggest a risk of self-harm. The VA is currently piloting an app that passively picks up vocal cues that can signal depression and mood swings. Verily is looking for similar biomarkers in smart watches and blood draws. The goal for all these efforts is to reach people where they are—on the internet and social media—instead of waiting for them to walk through a hospital door or hop in an fMRI tube.

You want a real window into someone's soul? Look at their Reddit subscriptions. It's all there: their passions, their hobbies, their ideological leanings, their love of terrible haircuts and sublime anonymized cringe. And if they're anything like me, those subscriptions also tell the tale of a life spent diving down rabbit holes.

Origami. Board games. Trail running. Pens. Cycling. Mechanical keyboards. Scrabble. (I know. God, I know. There are jokes to be made here. Trust that I've already made them all myself.) Whenever my interest attaches itself to a new thing—which has happened my entire life, cyclically and all-encompassingly—I tend to develop a singular, insatiable appetite for information about that thing. Hey, you know what the internet is really good at? Enabling singular, insatiable appetites.

Especially since 2005. That's the year Reddit and YouTube launched within months of each other, and obsession became centralized. You had options before that, blogs and message boards and Usenet forums, but they weren't exactly magnets of cross-pollination. They didn't fully open the floodgates to minute details and the masses yearning to pore over them. Then, on opposite sides of the country, two different small groups of twentysomething dudes created twin engines of infatuation. Between their massive tents and their ease of use, Reddit and YouTube tore away the guardrail that had always stood between serial hobbyists and oblivion.

For all the hand-wringing about both sites—YouTube's gameable recommendation algorithm that can radicalize dummies at the drop of a meme, Reddit's chelonian foot speed when dealing with bad actors and hate speech in the more noisome subreddits—both are incredible resources for the participatory realm. Watching more experienced people do what you're trying to do, sharing setups and techniques, even getting support and commiseration from those who are similarly, rapturously afloat in the same thing you can't stop reading and thinking about: It's not just a recipe for intellectual indulgence, but for improvement as well. (On YouTube, that value comes from the creator; on Reddit, it comes from the comments. Swap the two at your own peril.)

Rabbit holes are what make Beauty YouTube such a colossus, why the Ask Science subreddit has 16 million subscribers. But they also hold a secret: The deeper you go, the tighter it gets. That's because a rabbit hole is a filter bubble of sorts, albeit one that's labeled as such and explicitly opted into—you're there because you're interested in this Thing, as is everyone else, and under such celebratory scrutiny that Thing distends, its perceived stature far outweighing its real-life impact. Just because there are a million opinions about something doesn't make it important to anyone outside the bubble, let alone crucial.

And before long, orthodoxy rears its head. Want to make coffee? Oh, you're going to need to spend hours dialing in the grind on your $1,000 Mazzer Mini E before pouring 205-degree water over it from your gooseneck kettle. Don't forget to account for the bloom! Want to get a new keyboard that feels better and looks nicer than your laptop's? Great, but Topre switches or GTFO. Oh, and don't stop at one. Or two. Or 17.

Don't get me wrong. I'm a collector. I love the right tool for the right job, and I love research even more. (I'm really fucking weird about my pens.) But more than once I've become consumed by the idea that my experience with a Thing will be utterly transformed if I just treat myself to the right running vest. Or digital temperature regulator for an espresso machine. Or, yes, Scrabble-themed keycaps. That's not the joy of collecting; it's the expectation of fulfillment. I watch video reviews, or read people waxing rhapsodic, and it changes my Thing from a learning process, an intrinsic enjoyment, to a preamble. There's an "endgame"; there are "grails." Get the grail, and you're in the endgame.

But there's no endgame, and there's no grail. There's no bottom to the rabbit hole.

What there is is learning more about a thing you like to do, and maybe getting better at it. Running longer. Enjoying the feel of your pen on paper. Playing a game with friends. Everything else is just a commercial. So jump into all the rabbit holes you want—just don't expect to find Wonderland.



Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality—the participants remained deeply divided about how the theory should be interpreted.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Some physicists just shrug and say we have to live with the fact that quantum mechanics is weird. So particles can be in two places at once, or communicate instantaneously over vast distances? Get over it. After all, the theory works fine. If you want to calculate what experiments will reveal about subatomic particles, atoms, molecules and light, then quantum mechanics succeeds brilliantly.

But some researchers want to dig deeper. They want to know why quantum mechanics has the form it does, and they are engaged in an ambitious program to find out. It is called quantum reconstruction, and it amounts to trying to rebuild the theory from scratch based on a few simple principles.

If these efforts succeed, it’s possible that all the apparent oddness and confusion of quantum mechanics will melt away, and we will finally grasp what the theory has been trying to tell us. “For me, the ultimate goal is to prove that quantum theory is the only theory where our imperfect experiences allow us to build an ideal picture of the world,” said Giulio Chiribella, a theoretical physicist at the University of Hong Kong.

There’s no guarantee of success—no assurance that quantum mechanics really does have something plain and simple at its heart, rather than the abstruse collection of mathematical concepts used today. But even if quantum reconstruction efforts don’t pan out, they might point the way to an equally tantalizing goal: getting beyond quantum mechanics itself to a still deeper theory. “I think it might help us move towards a theory of quantum gravity,” said Lucien Hardy, a theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

The Flimsy Foundations of Quantum Mechanics

The basic premise of the quantum reconstruction game is summed up by the joke about the driver who, lost in rural Ireland, asks a passer-by how to get to Dublin. “I wouldn’t start from here,” comes the reply.

Where, in quantum mechanics, is “here”? The theory arose out of attempts to understand how atoms and molecules interact with light and other radiation, phenomena that classical physics couldn’t explain. Quantum theory was empirically motivated, and its rules were simply ones that seemed to fit what was observed. It uses mathematical formulas that, while tried and trusted, were essentially pulled out of a hat by the pioneers of the theory in the early 20th century.

Take Erwin Schrödinger’s equation for calculating the probabilistic properties of quantum particles. The particle is described by a “wave function” that encodes all we can know about it. It’s basically a wavelike mathematical expression, reflecting the well-known fact that quantum particles can sometimes seem to behave like waves. Want to know the probability that the particle will be observed in a particular place? Just calculate the square of the wave function (to be exact, the square of its absolute value, since the wave function is generally a complex number), and from that you can deduce how likely you are to detect the particle there. The probability of measuring some of its other observable properties can be found by, crudely speaking, applying a mathematical function called an operator to the wave function.
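Numerically, the recipe in this paragraph is short. A sketch with a discretized one-dimensional wave function (a Gaussian wave packet, chosen purely for illustration): normalize it, then apply the squared-modulus rule to get a detection probability:

```python
import numpy as np

# A discretized wave function on a 1-D grid: a Gaussian wave packet
# with a phase factor, so it is genuinely complex-valued.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(1j * 2 * x)

# Normalize so the total probability is 1.
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# The rule: probability density is |psi|^2, the squared modulus.
density = np.abs(psi)**2
p_left = np.sum(density[x < 0]) * dx   # chance of detection at x < 0

print(f"total probability: {np.sum(density) * dx:.4f}")
print(f"P(x < 0) = {p_left:.4f}")
```

For this symmetric packet the detection probability on the left half-line comes out very close to one half, exactly as the symmetry of the density demands.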

But this rule for calculating probabilities—now known as the Born rule—was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. Quantum mechanics seems largely built of arbitrary rules like this, some of them—such as the mathematical properties of operators that correspond to observable properties of the system—rather arcane. It’s a complex framework, but it’s also an ad hoc patchwork, lacking any obvious physical interpretation or justification.

Compare this with the ground rules, or axioms, of Einstein’s theory of special relativity, which was as revolutionary in its way as quantum mechanics. (Einstein launched them both, rather miraculously, in 1905.) Before Einstein, there was an untidy collection of equations to describe how light behaves from the point of view of a moving observer. Einstein dispelled the mathematical fog with two simple and intuitive principles: that the speed of light is constant, and that the laws of physics are the same for two observers moving at constant speed relative to one another. Grant these basic principles, and the rest of the theory follows. Not only are the axioms simple, but we can see at once what they mean in physical terms.

What are the analogous statements for quantum mechanics? The eminent physicist John Wheeler once asserted that if we really understood the central point of quantum theory, we would be able to state it in one simple sentence that anyone could understand. If such a statement exists, some quantum reconstructionists suspect that we’ll find it only by rebuilding quantum theory from scratch: by tearing up the work of Bohr, Heisenberg and Schrödinger and starting again.

Quantum Roulette

One of the first efforts at quantum reconstruction was made in 2001 by Hardy, then at the University of Oxford. He ignored everything that we typically associate with quantum mechanics, such as quantum jumps, wave-particle duality and uncertainty. Instead, Hardy focused on probability: specifically, the probabilities that relate the possible states of a system with the chance of observing each state in a measurement. Hardy found that these bare bones were enough to get all that familiar quantum stuff back again.

Hardy assumed that any system can be described by some list of properties and their possible values. For example, in the case of a tossed coin, the salient values might be whether it comes up heads or tails. Then he considered the possibilities for measuring those values definitively in a single observation. You might think any distinct state of any system can always be reliably distinguished (at least in principle) by a measurement or observation. And that’s true for objects in classical physics.

In quantum mechanics, however, a particle can exist not just in distinct states, like the heads and tails of a coin, but in a so-called superposition—roughly speaking, a combination of those states. In other words, a quantum bit, or qubit, can be not just in the binary state of 0 or 1, but in a superposition of the two.

But if you make a measurement of that qubit, you’ll only ever get a result of 1 or 0. That is the mystery of quantum mechanics, often referred to as the collapse of the wave function: Measurements elicit only one of the possible outcomes. To put it another way, a quantum object commonly has more options for measurements encoded in the wave function than can be seen in practice.
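The gap between a superposition and what a measurement returns can be seen in a few lines. A toy sketch: the state stores two amplitudes, but every simulated measurement yields a single 0 or 1, with frequencies fixed by the squared amplitudes:

```python
import numpy as np

rng = np.random.default_rng(1)

# A qubit in an equal superposition of |0> and |1>:
# amplitudes 1/sqrt(2) each, so the state is normalized.
state = np.array([1, 1]) / np.sqrt(2)

# Outcome probabilities are the squared amplitudes.
probs = np.abs(state)**2          # [0.5, 0.5]

# Each measurement "collapses" the superposition to a single outcome:
# you only ever read out 0 or 1, never the superposition itself.
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(f"fraction of 1s: {outcomes.mean():.3f}")   # close to 0.5
```

The superposition lives only in the amplitudes; no single readout ever reveals it, which is exactly the "more options encoded than can be seen in practice" point above.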

Hardy’s rules governing possible states and their relationship to measurement outcomes acknowledged this property of quantum bits. In essence the rules were (probabilistic) ones about how systems can carry information and how they can be combined and interconverted.

Hardy then showed that the simplest possible theory to describe such systems is quantum mechanics, with all its characteristic phenomena such as wavelike interference and entanglement, in which the properties of different objects become interdependent. “Hardy’s 2001 paper was the ‘Yes, we can!’ moment of the reconstruction program,” Chiribella said. “It told us that in some way or another we can get to a reconstruction of quantum theory.”

More specifically, it implied that the core trait of quantum theory is that it is inherently probabilistic. “Quantum theory can be seen as a generalized probability theory, an abstract thing that can be studied detached from its application to physics,” Chiribella said. This approach doesn’t address any underlying physics at all, but just considers how outputs are related to inputs: what we can measure given how a state is prepared (a so-called operational perspective). “What the physical system is is not specified and plays no role in the results,” Chiribella said. These generalized probability theories are “pure syntax,” he added—they relate states and measurements, just as linguistic syntax relates categories of words, without regard to what the words mean. In other words, Chiribella explained, generalized probability theories “are the syntax of physical theories, once we strip them of the semantics.”

The general idea for all approaches in quantum reconstruction, then, is to start by listing the probabilities that a user of the theory assigns to each of the possible outcomes of all the measurements the user can perform on a system. That list is the “state of the system.” The only other ingredients are the ways in which states can be transformed into one another, and the probability of the outputs given certain inputs. This operational approach to reconstruction “doesn’t assume space-time or causality or anything, only a distinction between these two types of data,” said Alexei Grinbaum, a philosopher of physics at the CEA Saclay in France.

To distinguish quantum theory from a generalized probability theory, you need specific kinds of constraints on the probabilities and possible outcomes of measurement. But those constraints aren’t unique. So lots of possible theories of probability look quantum-like. How then do you pick out the right one?

“We can look for probabilistic theories that are similar to quantum theory but differ in specific aspects,” said Matthias Kleinmann, a theoretical physicist at the University of the Basque Country in Bilbao, Spain. If you can then find postulates that select quantum mechanics specifically, he explained, you can “drop or weaken some of them and work out mathematically what other theories appear as solutions.” Such exploration of what lies beyond quantum mechanics is not just academic doodling, for it’s possible—indeed, likely—that quantum mechanics is itself just an approximation of a deeper theory. That theory might emerge, as quantum theory did from classical physics, from violations in quantum theory that appear if we push it hard enough.

Bits and Pieces

Some researchers suspect that ultimately the axioms of a quantum reconstruction will be about information: what can and can’t be done with it. One such derivation of quantum theory based on axioms about information was proposed in 2010 by Chiribella, then working at the Perimeter Institute, and his collaborators Giacomo Mauro D’Ariano and Paolo Perinotti of the University of Pavia in Italy. “Loosely speaking,” explained Jacques Pienaar, a theoretical physicist at the University of Vienna, “their principles state that information should be localized in space and time, that systems should be able to encode information about each other, and that every process should in principle be reversible, so that information is conserved.” (In irreversible processes, by contrast, information is typically lost—just as it is when you erase a file on your hard drive.)

What’s more, said Pienaar, these axioms can all be explained using ordinary language. “They all pertain directly to the elements of human experience, namely, what real experimenters ought to be able to do with the systems in their laboratories,” he said. “And they all seem quite reasonable, so that it is easy to accept their truth.” Chiribella and his colleagues showed that a system governed by these rules shows all the familiar quantum behaviors, such as superposition and entanglement.

One challenge is to decide what should be designated an axiom and what physicists should try to derive from the axioms. Take the quantum no-cloning rule, which is another of the principles that naturally arises from Chiribella’s reconstruction. One of the deep findings of modern quantum theory, this principle states that it is impossible to make a duplicate of an arbitrary, unknown quantum state.

It sounds like a technicality (albeit a highly inconvenient one for scientists and mathematicians seeking to design quantum computers). But in an effort in 2002 to derive quantum mechanics from rules about what is permitted with quantum information, Jeffrey Bub of the University of Maryland and his colleagues Rob Clifton of the University of Pittsburgh and Hans Halvorson of Princeton University made no-cloning one of three fundamental axioms. One of the others was a straightforward consequence of special relativity: You can’t transmit information between two objects more quickly than the speed of light by making a measurement on one of the objects. The third axiom was harder to state, but it also crops up as a constraint on quantum information technology. In essence, it limits how securely a bit of information can be exchanged without being tampered with: The rule is a prohibition on what is called “unconditionally secure bit commitment.”

These axioms seem to relate to the practicalities of managing quantum information. But if we consider them instead to be fundamental, and if we additionally assume that the algebra of quantum theory has a property called non-commutation, meaning that the order in which you do calculations matters (in contrast to the multiplication of two numbers, which can be done in any order), Clifton, Bub and Halvorson have shown that these rules too give rise to superposition, entanglement, uncertainty, nonlocality and so on: the core phenomena of quantum theory.

Another information-focused reconstruction was suggested in 2009 by Borivoje Dakić and Časlav Brukner, physicists at the University of Vienna. They proposed three “reasonable axioms” having to do with information capacity: that the most elementary component of all systems can carry no more than one bit of information, that the state of a composite system made up of subsystems is completely determined by measurements on its subsystems, and that you can convert any “pure” state to another and back again (like flipping a coin between heads and tails).

Dakić and Brukner showed that these assumptions lead inevitably to classical and quantum-style probability, and to no other kinds. What’s more, if you modify axiom three to say that states get converted continuously—little by little, rather than in one big jump—you get only quantum theory, not classical. (Yes, it really is that way round, contrary to what the “quantum jump” idea would have you expect—you can interconvert states of quantum spins by rotating their orientation smoothly, but you can’t gradually convert a classical heads to a tails.) “If we don’t have continuity, then we don’t have quantum theory,” Grinbaum said.
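The continuity axiom is easy to see concretely. A small sketch of the quantum side of the contrast: a qubit can be steered from one pure state to another through a continuum of intermediate pure states, every one of them normalized, whereas a classical bit has nothing between heads and tails:

```python
import numpy as np

# Continuously rotate a qubit from |0> to |1>: at every intermediate
# angle the state is still a valid (normalized) pure state.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

for theta in np.linspace(0, np.pi / 2, 5):
    state = np.cos(theta) * ket0 + np.sin(theta) * ket1
    norm = np.linalg.norm(state)
    print(f"theta={theta:.2f}  state={np.round(state, 3)}  norm={norm:.3f}")

# A classical bit has no such intermediate states: heads flips to tails
# in one discontinuous jump, which is the distinction the continuity
# version of the third axiom exploits.
```

Each printed line is a legitimate pure state; delete the intermediate angles and only the classical endpoints survive, and with them only classical probability theory.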

A further approach in the spirit of quantum reconstruction is called quantum Bayesianism, or QBism. Devised by Carlton Caves, Christopher Fuchs and Rüdiger Schack in the early 2000s, it takes the provocative position that the mathematical machinery of quantum mechanics has nothing to do with the way the world really is; rather, it is just the appropriate framework that lets us develop expectations and beliefs about the outcomes of our interventions. It takes its cue from the Bayesian approach to classical probability developed in the 18th century, in which probabilities stem from personal beliefs rather than observed frequencies. In QBism, quantum probabilities calculated by the Born rule don’t tell us what we’ll measure, but only what we should rationally expect to measure.

In this view, the world isn’t bound by rules—or at least, not by quantum rules. Indeed, there may be no fundamental laws governing the way particles interact; instead, laws emerge at the scale of our observations. This possibility was considered by John Wheeler, who dubbed the scenario Law Without Law. It would mean that “quantum theory is merely a tool to make comprehensible a lawless slicing-up of nature,” said Adán Cabello, a physicist at the University of Seville. Can we derive quantum theory from these premises alone?

“At first sight, it seems impossible,” Cabello admitted—the ingredients seem far too thin, not to mention arbitrary and alien to the usual assumptions of science. “But what if we manage to do it?” he asked. “Shouldn’t this shock anyone who thinks of quantum theory as an expression of properties of nature?”

Making Space for Gravity

In Hardy’s view, quantum reconstructions have been almost too successful, in one sense: Various sets of axioms all give rise to the basic structure of quantum mechanics. “We have these different sets of axioms, but when you look at them, you can see the connections between them,” he said. “They all seem reasonably good and are in a formal sense equivalent because they all give you quantum theory.” And that’s not quite what he’d hoped for. “When I started on this, what I wanted to see was two or so obvious, compelling axioms that would give you quantum theory and which no one would argue with.”

So how do we choose between the options available? “My suspicion now is that there is still a deeper level to go to in understanding quantum theory,” Hardy said. And he hopes that this deeper level will point beyond quantum theory, to the elusive goal of a quantum theory of gravity. “That’s the next step,” he said. Several researchers working on reconstructions now hope that the axiomatic approach will help us see how to pose quantum theory in a way that forges a connection with the modern theory of gravitation—Einstein’s general relativity.

Look at the Schrödinger equation and you will find no clues about how to take that step. But quantum reconstructions with an “informational” flavor speak about how information-carrying systems can affect one another, a framework of causation that hints at a link to the space-time picture of general relativity. Causation imposes chronological ordering: An effect can’t precede its cause. But Hardy suspects that the axioms we need to build quantum theory will be ones that embrace a lack of definite causal structure—no unique time-ordering of events—which he says is what we should expect when quantum theory is combined with general relativity. “I’d like to see axioms that are as causally neutral as possible, because they’d be better candidates as axioms that come from quantum gravity,” he said.

Hardy first suggested that quantum-gravitational systems might show indefinite causal structure in 2007. And in fact only quantum mechanics can display that. While working on quantum reconstructions, Chiribella was inspired to propose an experiment to create causal superpositions of quantum systems, in which there is no definite series of cause-and-effect events. This experiment has now been carried out by Philip Walther’s lab at the University of Vienna—and it might incidentally point to a way of making quantum computing more efficient.

“I find this a striking illustration of the usefulness of the reconstruction approach,” Chiribella said. “Capturing quantum theory with axioms is not just an intellectual exercise. We want the axioms to do something useful for us—to help us reason about quantum theory, invent new communication protocols and new algorithms for quantum computers, and to be a guide for the formulation of new physics.”

But can quantum reconstructions also help us understand the “meaning” of quantum mechanics? Hardy doubts that these efforts can resolve arguments about interpretation—whether we need many worlds or just one, for example. After all, precisely because the reconstructionist program is inherently “operational,” meaning that it focuses on the “user experience”—probabilities about what we measure—it may never speak about the “underlying reality” that creates those probabilities.

“When I went into this approach, I hoped it would help to resolve these interpretational problems,” Hardy admitted. “But I would say it hasn’t.” Cabello agrees. “One can argue that previous reconstructions failed to make quantum theory less puzzling or to explain where quantum theory comes from,” he said. “All of them seem to miss the mark for an ultimate understanding of the theory.” But he remains optimistic: “I still think that the right approach will dissolve the problems and we will understand the theory.”

Maybe, Hardy said, these challenges stem from the fact that the more fundamental description of reality is rooted in that still undiscovered theory of quantum gravity. “Perhaps when we finally get our hands on quantum gravity, the interpretation will suggest itself,” he said. “Or it might be worse!”

    More Quanta

  • Megan Molteni

    Harvey Evacuees Leave Their Belongings—and Health Records—Behind

  • Natalie Wolchover

    The Man Who's Trying to Kill Dark Matter

  • Frank Wilczek

    Your Simple (Yes, Simple) Guide to Quantum Entanglement

Right now, quantum reconstruction has few adherents—which pleases Hardy, as it means that it’s still a relatively tranquil field. But if it makes serious inroads into quantum gravity, that will surely change. In the 2011 poll, about a quarter of the respondents felt that quantum reconstructions will lead to a new, deeper theory. A one-in-four chance certainly seems worth a shot.

Grinbaum thinks that the task of building the whole of quantum theory from scratch with a handful of axioms may ultimately be unsuccessful. “I’m now very pessimistic about complete reconstructions,” he said. But, he suggested, why not try to do it piece by piece instead—to just reconstruct particular aspects, such as nonlocality or causality? “Why would one try to reconstruct the entire edifice of quantum theory if we know that it’s made of different bricks?” he asked. “Reconstruct the bricks first. Maybe remove some and look at what kind of new theory may emerge.”

“I think quantum theory as we know it will not stand,” Grinbaum said. “Which of its feet of clay will break first is what reconstructions are trying to explore.” He thinks that, as this daunting task proceeds, some of the most vexing and vague issues in standard quantum theory—such as the process of measurement and the role of the observer—will disappear, and we’ll see that the real challenges are elsewhere. “What is needed is new mathematics that will render these notions scientific,” he said. Then, perhaps, we’ll understand what we’ve been arguing about for so long.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Related Video

Business

What the What Is Quantum Computing? We've Got You Covered

Thanks to the superposition principle, a quantum machine has the potential to become an exponentially more powerful computer. If that makes little sense to you, here's quantum computing explained.

Tech companies are eyeing the next frontier: the human face. Should you desire, you can now superimpose any variety of animal snouts onto a video of yourself in real time. If you choose to hemorrhage money on the new iPhone X, you can unlock your smartphone with a glance. At a KFC location in Hangzhou, China, you can even pay for a chicken sandwich by smiling at a camera. And at least one in four police departments in the US have access to facial recognition software to help them identify suspects.

But the tech isn’t perfect. Your iPhone X might not always unlock; a cop might arrest the wrong person. In order for software to always recognize your face as you, an entire sequence of algorithms has to work. First, the software has to be able to determine whether an image has a face in it at all. If you’re a cop trying to find a missing kid in a photo of a crowd, you might want the software to sort the faces by age. And ultimately, you need an algorithm that can compare each face with another photo in a database, perhaps with different lighting and at a different angle, and determine whether they’re the same person.
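The sequence just described—detect a face, optionally bin candidates by an attribute like age, then compare embeddings—can be sketched as a pipeline. Every function and data structure below is a hypothetical stand-in for illustration, not a real face-recognition library's API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Face:
    embedding: List[float]            # feature vector used for comparison
    estimated_age: Optional[int] = None

def detect_faces(image) -> List[Face]:
    """Step 1: does the image contain a face at all? (stub implementation)"""
    return []

def sort_by_age(faces: List[Face]) -> List[Face]:
    """Step 2 (optional): order candidates by age, e.g. when searching
    a crowd photo for a missing child."""
    return sorted(faces, key=lambda f: f.estimated_age or 0)

def same_person(a: Face, b: Face, threshold: float = 0.6) -> bool:
    """Step 3: decide whether two faces match despite lighting/angle
    differences, here via a toy Euclidean distance on embeddings."""
    dist = sum((x - y) ** 2 for x, y in zip(a.embedding, b.embedding)) ** 0.5
    return dist < threshold

# Usage sketch with made-up embeddings
probe = Face(embedding=[0.1, 0.2, 0.3])
candidate = Face(embedding=[0.12, 0.21, 0.29])
print(same_person(probe, candidate))  # True: the toy vectors are close
```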

To improve these algorithms, researchers have found themselves using the tools of pollsters and social scientists: demographics. When they teach face recognition software about race, gender, and age, it can often perform certain tasks better. “This is not a surprising result,” says biometrics researcher Anil Jain of Michigan State University, “that if you model subpopulations separately you’ll get better results.” With better algorithms, maybe that cop won’t arrest the wrong person. Great news for everybody, right?

It’s not so simple. Demographic data may contribute to algorithms’ accuracy, but it also complicates their use.

Take a recent example. Researchers based at the University of Surrey in the UK and Jiangnan University in China were trying to improve an algorithm used in specific facial recognition applications. The algorithm, based on something called a 3-D morphable model, digitally converts a selfie into a 3-D head in less than a second. Model in hand, you can use it to rotate the angle of someone’s selfie, for example, to compare it to another photograph. The iPhone X and Snapchat use similar 3-D models.

The researchers gave their algorithm some basic instructions: Here’s a template of a head, and here’s the ability to stretch or compress it to get the 2-D image to drape over it as smoothly as possible. The template they used is essentially the average human face—average nose length, average pupil distance, average cheek diameter, calculated from 3-D scans they took of real people. When people made these models in the past, it was hard to collect a lot of scans because they’re time-consuming. So frequently, they’d just lump all their data together and calculate an average face, regardless of race, gender, or age.

The group used a database of 942 faces—3-D scans collected in the UK and in China—to make their template. But instead of calculating the average of all 942 faces at once, they categorized the face data by race. They made separate templates for each race—an average Asian face, white face, and black face, and based their algorithm on these three templates. And even though they had only 10 scans of black faces—they had 100 white faces and over 800 Asian faces—they found that their algorithm generated a 3-D model that matched a real person’s head better than the previous one-template model.
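The core move here—replacing one pooled average-face template with a separate mean per group, then starting the fit from the closest template—can be sketched in a few lines of numpy. The data, group names, and nearest-template rule below are invented for illustration; the actual 3-D morphable-model fitting is far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 3-D face scans: each "scan" is a flattened vector of
# landmark coordinates. Group names and sizes are illustrative only
# (note the deliberately tiny third group, echoing the 10-scan case).
scans = {
    "group_a": rng.normal(loc=0.0, scale=1.0, size=(100, 30)),
    "group_b": rng.normal(loc=2.0, scale=1.0, size=(800, 30)),
    "group_c": rng.normal(loc=-2.0, scale=1.0, size=(10, 30)),
}

# One template per group: that group's mean face, rather than a single
# global average computed over all scans pooled together.
templates = {name: data.mean(axis=0) for name, data in scans.items()}

def best_template(new_scan: np.ndarray) -> str:
    """Pick the closest template to start the 3-D fit from."""
    return min(templates, key=lambda n: np.linalg.norm(new_scan - templates[n]))

# A scan drawn near group_c's mean selects group_c's template,
# so the model starts closer to the right head shape.
probe = rng.normal(loc=-2.0, scale=0.1, size=30)
print(best_template(probe))  # 'group_c'
```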

“It’s not only for race,” says computer scientist Zhenhua Feng of the University of Surrey. “If you have a model for an infant, you can construct an infant’s 3-D face better. If you have a model for an old person, you can construct that type of 3-D face better.” So if you teach biometric software explicitly about social categories, it does a better job.

Feng’s particular 3-D models are a niche algorithm in facial recognition, says Jain—the trendy algorithms right now use 2-D photos because 3-D face data is hard to work with. But other more widespread techniques also lump people into categories to improve their performance. A more common 3-D face model, known as a person-specific model, also often uses face templates. Depending on whether the person in the picture is a man, woman, infant, or an elderly person, the algorithm will start with a different template. For specific 2-D machine learning algorithms that verify that two photographs contain the same person, researchers have demonstrated that if you break down different appearance attributes—gender, race, but also eye color, expression—it will also perform more accurately.

    More on Bias in AI

  • Scott Rosenberg

    Why AI Is Still Waiting For Its Ethics Transplant

  • Sophia Chen

    AI Research Is in Desperate Need of an Ethical Watchdog

  • Megan Garcia

    How to Keep Your AI From Turning Into a Racist Monster

So if you teach an algorithm about race, does that make it racist? Not necessarily, says sociologist Alondra Nelson of Columbia University, who studies the ethics of new technologies. Social scientists categorize data using demographic information all the time, in response to how society has already structured itself. For example, sociologists often analyze behaviors along gender or racial lines. “We live in a world that uses race for everything,” says Nelson. “I don’t understand the argument that we’re not supposed to here.” Existing databases—the FBI’s face depository, and the census—already stick people in predetermined boxes, so if you want an algorithm to work with these databases, you’ll have to use those categories.

However, Nelson points out, it’s important that computer scientists think through why they’ve chosen to use race over other categories. It’s possible that other variables with less potential for discrimination or bias would be just as effective. “Would it be OK to pick categories like blue eyes, brown eyes, thin nose, not thin nose, or whatever—and not have it to do with race at all?” says Nelson.

Researchers need to imagine the possible applications of their work, particularly the ones that governments or institutions of power might use, says Nelson. Last year, the FBI released surveillance footage they took to monitor Black Lives Matter protests in Baltimore—whose state police department has been using facial recognition software since 2011. “As this work gets more technically complicated, it falls on researchers not just to do the technical work, but the ethical work as well,” Nelson says. In other words, the software in Snapchat—how could the cops use it?

Related Video

Business

Robots & Us: A Brief History of Our Robotic Future

Artificial intelligence and automation stand to upend nearly every aspect of modern life, from transportation to health care and even work. So how did we get here and where are we going?

A New Captain Marvel Trailer Is Coming Tonight

March 20, 2019

It's time once again to turn on The Monitor, WIRED's roundup of the latest in the world of culture, from box-office news to announcements about hot new trailers. In today's installment: Captain Marvel readies for lift-off; Stephen King signs up for HBO; and Marvel breaks new ground.

She Is the Captain Now

Marvel will debut the next, and perhaps final, full trailer for Captain Marvel tonight during ESPN's Monday Night Football game between the San Junipero Jawas and the Trouble City Tribbles (those are actual sports teams, right?). The movie, which stars Brie Larson as the titular good-doer, arrives next year. Watch for the trailer on WIRED later today. And speaking of all things Marvel…

'Master' Plan

…the studio has announced a big-screen stand-alone film following Shang-Chi, the Asian-American superhero (and occasional Avenger) who was introduced in the 1970s and hailed as "The Master of Kung Fu." The Shang-Chi script will be written by Dave Callaham, who wrote next year's Wonder Woman 1984, and is basically working on every movie you'll be watching in the next two years. No release date or plot details for Shang-Chi are known yet, but Marvel is reportedly fast-tracking the film, so expect more updates soon.

Because the Internet

It wasn't quite a slaughter race at the box office last weekend, with Disney's Ralph Breaks the Internet easily topping the chart once again, earning more than $25 million. The hit animated film was followed by such weeks-old hits as The Grinch, Creed II, Fantastic Beasts: The Crimes of Grindelwald, and Bohemian Rhapsody, the latter of which has now made half a billion worldwide. But the perch wasn't Ralph's only weekend victory: It was also nominated in the Best Animated Feature category for the year's Annie Awards, alongside such films as Isle of Dogs and Spider-Man: Into the Spider-Verse.

King's Things

HBO is turning Stephen King's recent horror-procedural hit The Outsider into a series. The author's 7,863rd bestseller—about a Midwest murder investigation that bleeds into the realm of the supernatural—is being overseen for the small screen by Jason Bateman, who will direct two episodes and produce. Emmy winner (and WIRED favorite) Ben Mendelsohn will star, adding to his roster of dark-hearted tales, which includes everything from Animal Kingdom to Netflix's Bloodline to Rogue One: A Star Wars Story, in which he stared down a deadly Darth Vader pun.

This story originally appeared on CityLab and is part of the Climate Desk collaboration.

If you want an unusual but punchy telling of the world’s explosion of climate-warping gases, look no further than this visualization of CO2 levels over the past centuries soaring like skyscrapers into space.

“A Brief History of CO2 Emissions” portrays the cumulative amount of this common greenhouse gas that humans have produced since the mid-1700s. It also projects to the end of the 21st century to show what might happen if the world disregards the Paris Agreement, an ambitious effort to limit warming that 200 countries signed onto in 2015. (President Donald Trump still wants to renege on it.) At this point, the CO2-plagued atmosphere could see jumps in average temperature as high as 6 to 9 degrees Fahrenheit, the animation’s narrator warns, displaying a model of Earth looking less like a planet than a porcupine.

“We wanted to show where and when CO2 was emitted in the last 250 years—and might be emitted in the coming 80 years if no climate action is taken,” emails Boris Mueller, a creator of the viz along with designer Julian Braun and others at Germany’s University of Applied Sciences Potsdam and the Potsdam Institute for Climate Impact Research. “By visualizing the global distribution and the local amount of cumulated CO2, we were able to create a strong image that demonstrates very clearly the dominant CO2-emitting regions and time spans.”

The visualization begins with a small, white lump growing on London around 1760—the start of the Industrial Revolution. More white dots quickly appear throughout Europe, rising prominently in Paris and Brussels in the mid-1800s, then throughout Asia and the US, where in the early 1900s emissions skyrocket in the New York region, Chicago, and Southern California.

By the time the present day rolls around, the world looks home to the biggest construction project in existence, with spires that’d put the Burj Khalifa to shame ascending in the US, China, and Europe—currently the worst emitters in terms of volume of CO2.

For this project, the team pulled historical data from the US Department of Energy-affiliated Carbon Dioxide Information Analysis Center. The “CO2 emission estimates are deduced from information on the amount and location of fossil-fuel combustion and cement production over time,” says Elmar Kriegler, the viz’s scientific lead. “Therefore, the visualization also tells the history of the Industrial Revolution which started in England, spread across Europe and the United States, and finally across the world.”
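The "skyscraper" heights in the viz are cumulative, not annual, figures: each region's spire at any moment is the running total of everything it has emitted so far. A minimal numpy sketch of that transformation, using made-up numbers rather than the CDIAC data:

```python
import numpy as np

# Illustrative annual-emissions series (invented values, not CDIAC data):
# rows = regions, columns = decades starting at 1760.
years = np.arange(1760, 1820, 10)
annual = np.array([
    [0.1, 0.3, 0.7, 1.5, 3.0, 6.0],   # region that industrializes early
    [0.0, 0.0, 0.1, 0.2, 0.5, 1.2],   # region that starts later
])

# The height of each region's "spire" at a given time is the running
# total of all emissions up to that point, not the rate in that decade.
cumulative = annual.cumsum(axis=1)

# Early region's totals grow to 11.6; the latecomer reaches only 2.0,
# even though its annual rate is still climbing at the end.
print(cumulative)
```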

Astute observers will notice a couple of troubling things, such as the huge amount of emissions pouring out of urban areas like London, New York, and Tokyo. Cities and the power plants that keep them humming remain the world’s largest source of anthropogenic greenhouse gases. Also notable: the relative absence of emissions in some parts of the planet. That isn’t necessarily a good thing. “Some regions, in particular Africa, still do not show a significant cumulative CO2-emissions signal,” says Kriegler, “highlighting that they are still in the beginning of industrialization and may increase their emissions rapidly in the future, if they follow the path of Europe, the U.S., Japan, and recently China and Southeast Asia.”

How likely is it the worst-case scenario portrayed in this viz is nearing our doorstep? The viz’s creators argue that some current damage is here to stay. But they have some cause for optimism, too. “Reducing CO2 emissions to zero in the second half of the century can be achieved with decisive, global-scale emissions-reductions policies and efforts,” Kriegler says. “The Paris Agreement can be an important [catalyst] for this development if embraced fully by the world’s leading emitters and powers. But as we say in the movie, the time to act is now.”

Related Video

Science

How Climate Change Is Already Affecting Earth

Though the planet has only warmed by one-degree Celsius since the Industrial Revolution, climate change's effect on earth has been anything but subtle. Here are some of the most astonishing developments over the past few years.

Welp, 2018 is going out with a bang. In the last week, America got a reminder that Russia hacked the 2016 US election by hijacking social media; acting attorney general Matthew Whitaker rejected legal advice to recuse himself from overseeing Special Counsel Robert Mueller's probe; drones attacked British airports; and California dealt with potential UFOs. Actually, considering how the rest of the year has gone, that's not much of a bang at all—just a standard week in 2018. But what else are people talking about on this wreck that is the internet? Let's find out, shall we?

Trump's Big Move

What Happened: President Trump announced the US would be pulling troops out of Syria, leading to some instability, to say the least.

What Really Happened: Trump's surprise holiday gift to the Middle East arrived early Wednesday, as reports surfaced suggesting that the United States was about to withdraw troops from Syria. Those reports were soon confirmed via Twitter, because of course.

No, wait; I mean these tweets—but please remember that Trump announced that the US has defeated ISIS all the same.

It was, to put things mildly, not a popular decision, even within Trump's own, traditionally kowtowing-no-matter-what party.

The decision came as a surprise to many, with a lot of people unsure how, exactly, the decision had been reached, especially considering the president’s own national security team was apparently against it. Others believed that he had a pretty good idea.

So, if his own defense secretary had no say, who exactly was consulted?

OK, sure; for any other administration, that would seem like a wild conspiracy theory. However, when you look at who benefits from this decision, you do start to wonder just a little.

Funny thing about those actually arguing in favor of this move: the president doesn’t seem to be aware that it's happening, judging by his public statements.

Wait. They have to fight ISIS? Wasn't ISIS defeated, according to a tweet made by exactly the same person just a day before? Man, international politics moves so quickly these days.

The Takeaway: An unexpected casualty of the decision might point to larger problems with Trump's attitude towards geopolitics: Defense Secretary Jim Mattis resigned Thursday over the matter, penning a letter that makes his feelings on the matter clear.

The Incomplete Sentencing of Michael Flynn

What Happened: Just in case anyone forgot: There's still an investigation into potentially illegal activity surrounding the presidential campaign of the man currently in the White House, and it's continuing to bear strange, surreal fruit.

What Really Happened: As if anyone could forget the ongoing legal trouble surrounding the Trump administration, this week saw a sentencing hearing for one of the president's former advisors—in this case, former National Security Advisor Michael Flynn. If it seems like it was just last week that one of Trump's former advisors had a sentencing hearing, that's because it was. But like the seasoned pro he is, the president was eager to get out in front of the story.

Still, it's just a sentencing. How exciting or surprising could that be, unless you’re Michael Cohen making statements about being free once you get three years in jail? Turns out, the answer was "very surprising."

These would be the circumstances alleged by Flynn’s lawyers that he was, essentially, hoodwinked into confessing because no one at the FBI told him that lying to the FBI was a crime. Things only continued from there.

Well, yeah; that sounds pretty wild, especially the whole not hiding disgust thing. But that was just the start.

So, that was a surreal event. Who saw an abrupt postponement coming? Definitely not Flynn’s attorneys, who the media judged to have badly miscalculated. But at least it ended well, in regard to the irony of the whole thing.

Roll on, March, I guess?

The Takeaway: When it comes to the surreal developments in a legal case like this, there’s a sensible response and a non-boring response. Guess which one this is.

Paul Ryan's Retirement Party

What Happened: Paul Ryan is just days away from retiring as Speaker of the House, so clearly it's time for a farewell tour that perhaps doesn't get the response he'd like.

What Really Happened: We're not saying that some politicians have an exaggerated sense of their own importance, but outgoing Speaker of the House Paul Ryan had a "farewell address" at the Library of Congress last week, and the invitation looked like this:

Actually, never mind the invitations, the actual speech didn't look too much better—

—but let's not think about the optics. Let's focus on the substance, shall we? Ryan complained about the "broken politics" of Washington, while congratulating himself on a tax bill that hurts the poor. So, you know, pretty much what you might expect, all things considered.

Let’s just say that not everyone was impressed with Ryan's speech—or, for that matter, his legacy as a political figure. Headlines like "Good Riddance, Paul Ryan," "So Long, Paul Ryan, You Won’t Be Missed," "Paul Ryan Is the Biggest Fake I've Ever Seen in Politics," and "Paul Ryan Was a Villain and No One Will Miss Him"—all of which are actually real, and from a 24-hour period, amazingly—might give that away.

In fact, we'd go so far as to say that some were particularly unimpressed.

So, uh, happy retirement…? (We'll always have your creepy workout photoshoot, Paul. Nothing will ever take that away from you. Sadly.)

The Takeaway: Meanwhile, the woman who is likely to replace Ryan had perhaps the greatest response to the entire thing.

Shaft the Messenger

What Happened: You weren't being paranoid after all; someone else really was able to get access to all your messages on Facebook. Doesn't that make you feel better?

What Really Happened: In case you thought that things couldn't get much worse for Facebook considering its recent public relation woes, guess what: It could get much worse. Take it away, New York Times.

Yes, you read that right, as unbelievable as it may sound.

Not enough yikes for you just yet? Oh, just keep going, because it gets worse.

Many people were wondering what the solution was. A recurring theme kept popping up.

Meanwhile, the media took a different, and far less surprising, tack, with everyone talking about deleting Facebook a lot.

How serious was this as a threat? Well, Facebook released two different responses to try and clear up rumors … by pretty much confirming the reporting. That's almost a start, kind of?

The Takeaway: On the plus side, at least this was the only PR disaster for Facebook this week related to other people having access to private information on the platform.

The Shutdown Looms

What Happened: It's been teased throughout 2018, but as the year draws to a close, perhaps the US has finally reached the point where the government is going to shut down. Just in time!

What Really Happened: The US government has been wavering around a shutdown for some time now. There have been short-term fixes and last-minute deals for months in an attempt to ensure that there isn't what Rep. Nancy Pelosi memorably called a Trump Shutdown. Last week, for example, with just days to go before funding ran out, there was a move towards one more before-the-buzzer save—not that anyone seemed to think it would work.

Funny story; it never even got a chance to fail in the Senate.

Yes, it’s Paul Ryan again, a day after bemoaning "broken politics," helping politics be that little bit more broken.

So … maybe the shutdown is back on?

Well, perhaps not…

President Trump, at least, spent Friday morning doing what he could. Which is to say, he tweeted about the subject a lot.

People were not incredibly impressed.

At the time of this writing, it's not been voted on by the Senate. But here's a funny story: the president is refusing to sign a bill that doesn't fund the border wall that was, originally, going to be paid for by Mexico (hey, remember those days?), but … what if there was an alternative? What if someone else wanted to pay for the wall so that the government could stay open?

Well, that seems entirely legit.

It's surely a sign of 2018 that it's actually impossible to reject this plan entirely out of hand. Maybe we should just run a GoFundMe to keep the government open? Oh, no, wait; that's called paying taxes.

The Takeaway: Assuming that we are almost certainly going to have a shutdown for the holidays—everyone's favorite gift—let's just take a moment to appreciate what's happening, shall we?

See you all in 2019!

Huntington’s disease is brutal in its simplicity. The disorder, which slowly bulldozes your ability to control your body, starts with just a single mutation, in the gene for huntingtin protein. That tweak tacks an unwelcome glob of glutamines—extra amino acids—onto the protein, turning it into a destroyer that attacks neurons.
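That "glob of glutamines" is an expanded repeat tract: extra copies of the amino acid glutamine (single-letter code Q) near the start of the huntingtin protein. As a toy illustration, here is a short Python sketch that measures the longest glutamine run in a protein sequence; the sequences are made up, and the roughly-36-repeat disease threshold is the commonly cited figure, not a diagnostic rule.

```python
import re

def longest_poly_q(protein: str) -> int:
    """Length of the longest uninterrupted run of glutamines (Q)."""
    runs = re.findall(r"Q+", protein)
    return max((len(r) for r in runs), default=0)

# Invented sequences: real huntingtin is ~3,000 amino acids long.
healthy = "MATL" + "Q" * 20 + "PPPLK"   # ~20 repeats: typical range
expanded = "MATL" + "Q" * 45 + "PPPLK"  # ~45 repeats: disease-associated range

print(longest_poly_q(healthy))   # 20
print(longest_poly_q(expanded))  # 45
```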

Huntington’s simplicity is exciting, because theoretically, it means you could treat it with a single drug targeted at that errant protein. But in the 24 years since scientists discovered the gene for huntingtin, the search for suitable drugs has come up empty. This century’s riches of genetic and chemical data seem like they should have sped up research, but so far, the drug pipeline is more faucet than fire hydrant.

Part of the problem is simply that drug design is hard. But many researchers point to the systems of paywalls and patents that lock up data, slowing the flow of information. So a nonprofit called the Structural Genomics Consortium is countering with a strategy of extreme openness. They’re partnering with nine pharmaceutical companies and labs at six universities, including Oxford, the University of Toronto, and UNC Chapel Hill. They’re pledging to share everything with each other—drug wish lists, results in open access journals, and experimental samples—hoping to speed up the long, expensive drug design process for tough diseases like Huntington’s.

Rachel Harding, a postdoc at the University of Toronto arm of the collaboration, joined up to study the Huntington’s protein after she finished her PhD at Oxford. In a recent round of experiments, her lab grew insect cells in stacks of lab flasks fed with pink media. After slipping the cells a DNA vector that directed them to produce huntingtin, Harding purified and stabilized the protein; once it has hung out in a deep freezer for a while, she’ll map it with an electron microscope at Oxford.

Harding’s approach deviates from the norm in one major way: She doesn’t wait to publish a paper before sharing her results. After each of her experiments, “we’ll just put that into the public domain so that more people can use our stuff for free,” she says: protocols, the genetic sequences that worked for making proteins, experimental data. She’d even like to share protein samples with interested researchers, as she’s offered on Twitter. All this work is to create a map of huntingtin, “how all the atoms are connected to each other in three-dimensional space,” Harding says, including potential binding sites for drugs.

The next step is to ping that protein structure with thousands of molecules (chemical probes) to see if any bind in a helpful way. That’s what Kilian Huber, a medicinal chemistry researcher at Oxford University’s arm of the Structural Genomics Consortium, spends his days working on. Given a certain protein, he develops a way to measure its activity in cells, and then tests it against chemicals from pharmaceutical companies’ compound libraries, full of thousands of potential drug molecules.

If they score a hit, Huber and his consortium collaborators have pledged not to patent any of these chemicals. To the contrary, they want to share any chemical probe that works so it can quickly get more replication and testing. Many times, at other researchers’ requests, he has “put these compounds in an envelope, and sent them over,” he says. Recipient researchers generally cover shipping costs, and the organization as a whole has shipped off more than 10,000 samples since it started in 2004.

Under the umbrella of the SGC, about 200 scientists like Kilian and Rachel have agreed to never file any patents, and to publish only open access papers. CEO Aled Edwards beams when he talks about the group’s “metastatic openness.” Asking researchers to agree to share their work hasn’t been a problem. “There’s a willingness to be open,” he says, “you just have to show the way.”

Is Sharing Caring?

There are a few challenges to such a high degree of openness. The academic labs have some say in which projects they tackle first—but it’s their funders that ultimately decide which tricky proteins everyone will work on. Each government, pharmaceutical company, or nonprofit that gifts $8 million to the organization can nominate proteins to a master to-do list, which researchers at these companies and affiliate universities tackle together.

That list could be a risk for the pharma companies at the table: While it doesn’t specify which company nominated which protein, the entire group can see that somebody is interested in a Huntington’s strategy, for example. But they’re hedging their bets on a selective reveal of their priorities. For several million dollars—a fraction of most of these companies’ R&D budgets—companies including Pfizer, Novartis, and Bayer buy into the scientific expertise of this group and stand to get results a bit faster. And since no one is patenting any of the genes, protein structures, or experimental chemicals they produce, the companies can still file their own patents for whatever drugs they create as a result of this research.

That might seem like a bum deal for the scientists doing all the work of discovery. But mostly, scientists at the SGC seem thrilled that collaborating can accelerate their research.

    Related Stories

  • Sarah Zhang

    Why Pharma Wants to Put Sensors in This Blockbuster Drug

  • Daniela Hernandez

    Fixing a Broken Drug Business by Spreading the Wealth

  • Josh McHugh

    Drug Test Cowboys: The Secret World of Pharmaceutical Trial Subjects

“Rather than trying to do everything yourself, I can just share whatever I'm generating, and give it to the people that I think are experts in that area,” says Huber. “Then they will share the information back with us, and that, to me, is the key, from a personal point of view, on top of hopefully being able to support the development of new medicines.” Because all the work is published open access, technically anyone in the world could benefit.

Edwards has pushed the SGC to slowly open up new steps of the drug discovery process. They started out working on genes, which is why they’re named a ‘genomics consortium’, then eked their way to sharing protein structures like the ones Harding works on. Creating and sharing tool compounds like Huber’s is their latest advance. “We’re trying to create a parallel universe where we can invent medicines in the open, where we can share our data,” Edwards says.

He hopes their approach will grow into a wider movement, with other life-science researchers getting on board with data sharing, so that open-source science improves reproducibility and speeds up discovery. The Montreal Neurological Institute stopped filing patents on any of its discoveries last year. And other groups, like the Open Source Malaria Project, have made a point of keeping all of their science in the open.

Sharing data won’t necessarily rein in the rising prices of certain drugs. But it could certainly speed up understanding of new compounds, and shore up their chances of getting through clinical trials. The drug-making process is so complicated that if data sharing shaved just a bit of time off each step, it could save patients years of waiting. The Huntington’s patients are waiting.

Related Video

Culture

Expired Medication: A Dose of Truth

Medicine has an expiration stamp—but is it actually, you know, serious? Or are those sell-by dates just a Big Pharma racket? Mr. Know-It-All gives you a healthy dose of the truth.

Waking up. Working out. Riding the bus. Music is an ever-present companion for many of us, and its impact is undeniable. You know music makes you move and triggers emotional responses, but how and why? What changes when you play music, rather than simply listen? In the latest episode of Tech Effects, we tried to find out. Our first stop was USC's Brain & Creativity Institute, where I headed into the fMRI to see how my brain responded to musical cues—and how my body did, too. (If you're someone who experiences frisson, that spine-tingling, hair-raising reaction to music, you know what I'm talking about.) We also talked to researchers who have studied how learning to play music can help kids become better problem-solvers, and to author Dan Levitin, who helped break down how the entire brain gets involved when you hear music.

From there, we dove into music's potential as a therapeutic tool—something Gabrielle Giffords can attest to. When the onetime congresswoman was shot in 2011, her brain injuries led to aphasia, a neurological condition that affects speech. Through the use of treatments that include melodic intonation therapy, music helped retrain her brain's pathways to access language again. "I compare it to being in traffic," says music therapist Maegan Morrow, who worked with Giffords. "Music is basically like [taking a] feeder road to the new destination."

But singing or playing something you know is different from composing on the fly. We also wanted to get to the bottom of improvisation and creativity, so we linked up with Xavier Dephrepaulezz—who you might know as two-time Grammy winner Fantastic Negrito. At UCSF, he went into an fMRI machine as well, though he brought a (plastic) keyboard so he could riff along and sing to a backing track. Neuroscientist Charles Limb, who studies musical creativity, helped take us through the results and explain why the prefrontal cortex shuts down during improvisation. "It's not just something that happens in clubs and jazz bars," he says. "It's actually maybe the most fundamental form of what it means to be human to come up with a new idea."

If you're interested in digging into the research from the experts in the video, here you go:

• Matthew Sachs’ research on music and frisson

• Assal Habibi, “Music training and child development: a review of recent findings from a longitudinal study.”

• Daniel Levitin’s research on music and the brain’s internal opioid system, and on music and stress

• Levitin's book, This is Your Brain on Music

• Charles Limb, “Your Brain on Improv” (TED Talk) and “Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation”

• ABC News' report on Gabrielle Giffords' music therapy

Every week, two million people across the world will sit for hours, hooked up to a whirring, blinking, blood-cleaning dialysis machine. Their alternatives: Find a kidney transplant or die.

In the US, dialysis is a roughly 40-billion-dollar business keeping 468,000 people with end-stage renal disease alive. The process is far from perfect, but that hasn't hindered the industry's growth. That's thanks to a federally mandated Medicare entitlement that guarantees any American who needs dialysis—regardless of age or financial status—can get it, and get it paid for.

The legally enshrined coverage of dialysis has doubtless saved thousands of lives since its enactment 45 years ago, but the procedure’s history of special treatment has also stymied innovation. Today, the US government spends about 50 times more on private dialysis companies than it does on kidney disease research to improve treatments and find new cures. In this funding atmosphere, scientists have made slow progress toward anything better than the dialysis-machine-filled storefronts and strip malls that provide a vital service to so many of the country's sickest people.

Now, after more than 20 years of work, one team of doctors and researchers is close to offering patients an implantable artificial kidney, a bionic device that uses the same technology that makes the chips that power your laptop and smartphone. Stacks of carefully designed silicon nanopore filters combine with live kidney cells grown in a bioreactor. The bundle is enclosed in a body-friendly box and connected to a patient’s circulatory system and bladder—no external tubing required.

The device would do more than detach dialysis patients—who experience much higher rates of fatigue, chronic pain, and depression than the average American—from a grueling treatment schedule. It would also address a critical shortfall of organs for transplant that continues despite a recent uptick in donations. For every person who received a kidney last year, 5 more on the waiting list didn’t. And 4,000 of them died.

There are still plenty of regulatory hurdles ahead—human testing is scheduled to begin early next year[1]—but this bioartificial kidney is already bringing hope to patients desperate to unhook for good.

Innovation, Interrupted

Kidneys are the body’s bookkeepers. They sort the good from the bad—a process crucial to maintaining a stable balance of bodily chemicals. But sometimes they stop working. Diabetes, high blood pressure, and some forms of cancers can all cause kidney damage and impair the organs' ability to function. Which is why doctors have long been on the lookout for ways to mimic their operations outside the body.

The first successful attempt at a human artificial kidney was a feat of Rube Goldberg-ian ingenuity, necessitated in large part by wartime austerity measures. In the spring of 1940, a young Dutch doctor named Willem Kolff decamped from his university post to wait out the Nazi occupation of the Netherlands in a rural hospital on the IJssel river. There he constructed an unwieldy contraption for treating people dying from kidney failure using some 50 yards of sausage casing, a rotating wooden drum, and a bath of saltwater. The semi-permeable casing filtered out small molecules of toxic kidney waste while keeping larger blood cells and other molecules intact. Kolff's apparatus enabled him to draw blood from his patients, push it through the 150 feet of submerged casings, and return it to them cleansed of deadly impurities.

In some ways, dialysis has advanced quite a bit since 1943. (Vaarwel, sausage casing, hello mass-produced cellulose tubing.) But its basic function has remained unchanged for more than 70 years.

Not because there aren’t plenty of things to improve on. Design and manufacturing flaws make dialysis much less efficient than a real kidney at taking bad stuff out of the body and keeping the good stuff in. Other biological functions it can’t duplicate at all. But any effort to substantially upgrade (or, heaven forbid, supplant) the technology has been undercut by a political promise made four and a half decades ago, with unforeseen economic repercussions.

In the 1960s, when dialysis started gaining traction among doctors treating chronic kidney failure, most patients couldn't afford its $30,000 price tag—and it wasn’t covered by insurance. This led to treatment rationing and the arrival of death panels to the American consciousness. In 1972, Richard Nixon signed a government mandate to pay for dialysis for anyone who needed it. At the time, the moral cost of failing to provide lifesaving care was deemed greater than the financial setback of doing so.

But the government accountants, unable to see the country’s coming obesity epidemic and all its attendant health problems, greatly underestimated the future need of the nation. In the decades since, the number of patients requiring dialysis has increased fiftyfold. Today the federal government spends as much on treating kidney disease—nearly $31 billion per year—as it does on the entire annual budget for the National Institutes of Health. The NIH devotes $574 million of its funding to kidney disease research to improve therapies and discover cures. It represents just 1.7 percent of the annual total cost of care for the condition.

But Shuvo Roy, a professor in the School of Pharmacy at UC San Francisco, didn’t know any of this back in the late 1990s when he was studying how to apply his electrical engineering chops to medical devices. Fresh off his PhD and starting a new job at the Cleveland Clinic, Roy was a hammer looking for interesting problems to solve. Cardiology and neurosurgery seemed like exciting, well-funded places to do that. So he started working on cardiac ultrasound. But one day, a few months in, an internal medicine resident at nearby Case Western Reserve University named William Fissell came up to Roy and asked: “Have you ever thought about working on the kidney?”

Roy hadn’t. But the more Fissell told him about how stagnant the field of kidney research had been, how ripe dialysis was for a technological overhaul, the more interested he got. And as he familiarized himself with the machines and the engineering behind them, Roy began to realize the extent of dialysis' limitations—and the potential for innovation.

Limitations like the pore-size problem. Dialysis does a decent job of cleansing blood of waste products, but it also filters out good stuff: salts, sugars, amino acids. Blame the polymer manufacturing process, which can’t replicate the 7-nanometer precision of nephrons, the kidney's natural filters. Making dialysis membranes involves a process called extrusion, which yields a distribution of pore sizes: most are about 7 nanometers, but some are much smaller, some are much larger, and everything in between. This is a problem: pores that are too small fail to clear some of the bad stuff (like urea and excess salts), while pores that are too large let some of the good stuff (necessary blood sugars, amino acids, and proteins) wash out. Seven nanometers is the size of albumin—a critical protein that keeps fluid from leaking out of blood vessels, nourishes tissues, and transports hormones, vitamins, drugs, and substances like calcium throughout the body. Taking too much of it out of the bloodstream would be a bad thing. And when it comes to the kidney’s other natural functions, like secreting hormones that regulate blood pressure, dialysis can’t do them at all. Only living cells can.
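To see why that spread matters, here is a minimal, purely illustrative sketch in Python: the distributions and cutoffs below are assumptions chosen to make the contrast visible, not measured membrane data.

```python
# Hypothetical comparison of pore-size spread: extruded polymer membrane
# vs. precision-etched silicon slits. All numbers are illustrative.
import random

random.seed(42)

TARGET_NM = 7.0   # nominal pore size, roughly the diameter of albumin
LEAK_NM = 8.0     # assumed cutoff: pores above this let albumin slip out
BLOCK_NM = 6.0    # assumed cutoff: pores below this fail to clear waste

def sample_pores(mean_nm, rel_spread, n=100_000):
    """Draw pore sizes with a given relative spread (sigma / mean)."""
    sigma = mean_nm * rel_spread
    return [random.gauss(mean_nm, sigma) for _ in range(n)]

# Extrusion yields a broad distribution around the 7 nm target...
polymer = sample_pores(TARGET_NM, rel_spread=0.25)
# ...while silicon microfabrication holds under 1 percent variation.
silicon = sample_pores(TARGET_NM, rel_spread=0.01)

def leak_fraction(pores):
    """Fraction of pores big enough to lose albumin."""
    return sum(p > LEAK_NM for p in pores) / len(pores)

def block_fraction(pores):
    """Fraction of pores too small to pass waste molecules."""
    return sum(p < BLOCK_NM for p in pores) / len(pores)

print(f"polymer: {leak_fraction(polymer):.1%} leaky, "
      f"{block_fraction(polymer):.1%} clogged")
print(f"silicon: {leak_fraction(silicon):.1%} leaky, "
      f"{block_fraction(silicon):.1%} clogged")
```

Under these assumed spreads, a sizable tail of the polymer's pores falls on the wrong side of each cutoff, while essentially none of the silicon slits do, which is the selectivity gap the tight manufacturing tolerance buys.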

“We were talking about making a better Bandaid,” Roy says. But as he and Fissell looked around them at the advances being made in live tissue engineering, they started thinking beyond a better, smaller, faster filter. “We thought, if people are growing ears on the backs of mice, why can’t we grow a kidney?”

It turned out, someone had already tried. Sort of.

Dialysis, Disrupted

Back in 1997, when Fissell and Roy were finishing up their advanced training at Case Western, a nephrologist named David Humes at the University of Michigan began working to isolate a particular kind of kidney cell found at the back end of the nephron. Humes figured out how to extract the cells from cadaver kidneys not suitable for transplant and grow them in his lab. Then he used them to coat the insides of hollow-fiber membrane tubes similar to the filter cartridge on modern dialysis machines. He had invented an artificial kidney that could live outside the human body on a continuous flow of blood from the patient—and do more than just filter.

The results were incredibly encouraging. In clinical trials at the University of Michigan Hospital, it cut the mortality rate of ICU patients with acute renal failure in half. There was just one problem. To work, the patient had to be permanently hooked up to half a hospital room’s worth of tubes and pumps.

The first time Roy saw Humes’ set-up, he immediately recognized its promise—and its limitations. Fissell had convinced him to drive from Cleveland to Ann Arbor in the middle of a snowstorm to check it out. The trip convinced them that the technology worked. It was just way too cumbersome for anyone to actually use it.

In 2000, Fissell joined Humes to do his nephrology fellowship at Michigan. Roy stayed at the Cleveland Clinic to work on cardiac medical devices. But for the next three years, nearly every Thursday afternoon Fissell hopped in his car and drove three hours east on I-90 to spend long weekends in Roy’s lab tackling a quintessentially 21st century engineering problem: miniaturization. They had no money, and no employees. But they were able to ride the wave of advancements in silicon manufacturing that was shrinking screens and battery packs across the electronics industry. “Silicon is the most perfected man-made material on Earth,” Roy says from the entrance to the vacuum-sealed clean room at UCSF, where his grad students produce the filters. If they want to make a slit that’s 7 nanometers wide, they can do that with silicon every time. It has a less than one percent variation rate.

The silicon filters had another advantage, too. Because Roy and Fissell wanted to create a small implantable device, they needed a way to make sure there wasn’t an immune response—similar to transplant rejection. Stacks of silicon filters could act as a screen to keep the body’s immune cells physically separated from Humes’ kidney cells, which would be embedded in a microscopic scaffold on the other side. The only thing getting through to them would be the salt- and waste-filled water, which the cells would further concentrate into urine and route to the bladder.

By 2007 the three researchers had made enough progress to apply for and receive a 3-year $3 million grant from the NIH to prove the concept of their implantable bioartificial kidney in an animal model. On the line was a second phase of funding, this time for $15 million, enough to take the project through human clinical trials. Roy moved out west to UCSF to be closer to the semiconductor manufacturing expertise in the Bay Area. Fissell worked on the project for a few more years at the Cleveland Clinic before being recruited to Vanderbilt while Humes stayed at the University of Michigan to keep working with his cells. But they didn’t make the cut. And without money, the research began to stall.

By then, though, their kidney project had taken on a following of its own. Patients from all over the world wanted to see it succeed. And over the next few years they began donating to the project—some sent in five-dollar bills, others signed checks for a million dollars. One six-year-old girl from upstate New York whose brother is on dialysis convinced her mother to let her hold a roadside garden vegetable sale and send in the proceeds. The universities chipped in too, and the scientists started to make more progress. They used 3D printing to test new prototypes and computer models of hydraulic flow to optimize how all the parts would fit together. They began collaborating with the surgeons in their medical schools to figure out the best procedure for implanting the devices. By 2015 the NIH was interested again. It signed on for another $6 million over the next four years. And then the FDA got interested.

That fall the agency selected the Kidney Project to participate in a new expedited regulatory approval plan intended to get medical innovations to patients faster. While Roy and Fissell have continued to tweak their device, helped along by weekly shipments of cryogenically frozen cells from Humes’ lab, FDA officials have shepherded them through two years of preclinical testing, most of which has been done in pigs and has shown good results. In April, the agency sent 20 of its scientists out to California to advise on the next step: moving into humans.

The plan is to start small—maybe ten patients tops—to test the safety of the silicon filter’s materials. Clotting is the biggest concern, so they’ll surgically implant the device in each participant’s abdomen for a month to make sure that doesn’t happen. If that goes well they will do a follow-up study to make sure it actually filters blood in humans the way it’s supposed to. Only then can they combine the filter with the bioreactor portion of the device, aka Humes’ renal cells, to test the full capacity of the artificial kidney.

The scientists expect to arrive at this final stage of clinical trials, and regulatory approval, by 2020. That may sound fast, but one thing they’ve already got a jump on is patient recruiting. Nearly 9,000 patients have already signed up for the project’s waitlist, ready to be contacted when clinical trials get the green light.

These patients are willing to accept the risks of pioneering a third option beyond transplants, which are too expensive and too hard to get for most people, and the drudgery of dialysis. “The more choices patients have, the better,” says Joseph Vassalotti, a nephrologist in Manhattan and the chief medical officer of the National Kidney Foundation, though he’s skeptical the device will become a reality within the next few years. An implantable kidney would dramatically improve patients’ quality of life—a welcome innovation after so many years of treatment status quo. “During World War II we didn’t think dialysis would be possible,” Vassalotti says. “Now half a million Americans are being treated with it. It’s amazing the progress just a few decades makes.”

[1] Correction, 12:50pm ET: The Kidney Project is now slated to begin clinical trials in early 2018. A previous version of this article incorrectly stated they would take place later this year. Changes have also been made to correctly identify the size and timing of grants to the Kidney Project.

Related Video

Science

If the Tin Man Actually Had a Heart, It'd Look Like This

A robotic heart points the way to a future where soft robots help us heal.