Neurosurgery: The Next 50 Years

    Author: Alan M. Scarrow

    If the editor of a neurosurgery journal ever asks you to write an article about predictions for the next 50 years of our specialty, the best answer is “no.” It’s a fool’s errand—particularly in the medical world, where knowledge is accumulated and recorded by experiment and observation. Predictions are for psychics and astrologers. Any thinking person knows that as the world becomes more uncertain and changes more rapidly, predicting the future becomes exceedingly difficult. Even when we are aware of and understand the innovations around us, it is hard to know how they will circulate and trigger changes in unforeseen ways.

    Author Steven Johnson notes in How We Got to Now that Gutenberg invented movable type printing in the early 1400s, which prompted new readers to recognize they were farsighted, which begat eyeglasses, which led to the microscope, which allowed Robert Hooke to describe cells 200 years later, which paved the way for a revolution in biology and medicine—hardly a set of foreseeable events.

    But the human brain is a nonstop prediction machine. It is always trying to figure out what’s coming next and craves certainty. So while predicting the future may be the stuff of crystal balls and Ouija boards, perhaps a reasonable task for us non-magic folk is to first try and understand the barriers that keep the future from arriving sooner. What lack of knowledge, technology gaps, economic forces, or regulatory shackles keep us where we are? Those are the types of questions that illuminate the future more than reckless predictions.

    A public health professional would likely say that the best-case scenario for our battle against neurologic disease in 2065 would be to simply prevent it from happening in the first place. This would require filling at least two large knowledge gaps. First, we would have to understand the underlying causes of common neurologic conditions like arthritis, tumors, aneurysms, and traumatic and cognitive disorders. Second, we would need to understand the behavioral choices that either cause or contribute to those maladies—and more importantly, we would need the ability to influence patient behavior and avoid the inherent risk of those choices. Both knowledge gaps seem immense, but the first may be easier to close than the second.

    Two years ago at the CNS Annual Meeting in Chicago, Google’s Director of Engineering Ray Kurzweil detailed his oft-noted observation that human knowledge grows exponentially over time, as evidenced by the number of patents, volume of information, and computing capacity. He asserts that human knowledge is now beyond the inflection point of the exponential curve, which will allow us to make extremely rapid improvements in the prevention of disease and significant advancement in life extension.

    Whether or not Kurzweil is right about life extension and the prevention of disease, it is clear that our knowledge is growing quite rapidly. Every two days we create as much information as we did from the dawn of civilization up to 2003. Perhaps this is because we have more scientists: the number of working scientists grew from 4.3 million to 6.3 million between 1999 and 2009, a figure that does not even include scientists in India. Does this mean we will understand the cause of most neurologic disease 50 years from now? The trajectory of knowledge indicates the odds are with us.

    But even if we knew what caused neurologic disease, what would we do about it? The struggle of most developed countries to control chronic disease indicates there is great difficulty answering that question. For example, consuming too many calories leads to obesity, obesity often leads to Type II diabetes, and Type II diabetes leads to all kinds of illness. This is not a secret, and yet it has been nearly impossible to control people’s eating habits regardless of culture, race, or ethnicity. It’s not that surprising. After all, no matter the language, instructing patients to say “no” to ice cream and expecting it to stick when they are in the privacy of their own home or the anonymity of a restaurant is a fantasy. Clearly we don’t always make choices in our best interest, and coming up with ways to influence behavior in politically and economically satisfying terms is daunting.

    However, according to Nudge authors Richard Thaler and Cass Sunstein, those choices may improve when we gain experience, have good information, and receive prompt feedback. For example, while it is easy for us to choose our favorite ice cream flavor for dessert, it’s not so easy choosing between ice cream and fruit (or no dessert at all) when the long-term effects of the choice are slow to appear and the feedback is poor. If there were reliable, immediate feedback about the long-term consequences of choosing ice cream over fruit, we might have a chance. Will we be able to influence our patients in ways that lead to better choices while preserving their fundamental rights of liberty and privacy? I suspect maybe a little, but it seems very likely we will be dealing with the consequences of poor behavioral choices for a long time to come.

    If we presume that in 2065 we will understand the causes of most neurologic diseases but not be able to prevent them from happening, what will be the role of neurosurgery in treating those maladies? In part, the answer may lie in our specialty’s name. Who would want to have surgery of any sort unless they absolutely had to? Obtaining all the benefits of surgery without going through any of its risks would seem a worthy goal. What would we have to overcome from a technological perspective in order to perform “non-invasive surgery”? Imaging would be a prerequisite. In some futuristic “Bones” McCoy way, we must be able to visualize that which we propose to treat, whether degenerative, neoplastic, traumatic, vascular, or otherwise. Once able to see it, we would need therapies small enough to pass through the skin or another natural orifice and attack the disease process. Nanomachines or molecular machines are the novel instruments most often invoked as examples of those therapies. Huge investments and advancements in nanotechnology over the past 10 years, combined with Kurzweil’s exponential growth theory, push me to believe that 50 years from now those kinds of technologies will be available. Moreover, as we are able to generate more personalized data about each patient, our ability to tailor individualized therapies seems even more likely. To put an even finer point on it, with these presumably portable diagnostics and minuscule personalized therapies, would there be a need for clinics and hospitals? And to create even more discomfort, would there be a need for people with highly trained eye-hand coordination like, say, surgeons?

    This leads to the last big barrier that keeps the future at arm’s length. There are economic and regulatory (i.e., ethical and political) realities that may seem tiresome in such a high-minded discussion about the future, but these are the limits we choose to put on ourselves. Sometimes they reflect our priorities, such as spending more on education or defense and less on healthcare. Sometimes they reflect our fears that we will be unable to control the consequences of new ideas, such as genetic enhancement therapies. But whether it’s 2015 or 2065, I hope we are just as thoughtful about those issues and that we approach them with honesty, integrity, and all the freedom and clarity of thought they deserve. There will always be important questions: Will some therapies only be available to those who can afford them? Who will be able to diagnose and provide therapies, and with what education or qualifications? Who will decide when therapies are indicated or futile? And when do individual choices produce too great a burden for the rest of our community?

    No matter how much our brains may want to know the future, the inherent limitation of our experience is that we can only imagine it to be some version of the present. In a Western mindset of indefinite optimism, we may want to believe that there are inevitabilities such as the reduction of disease and the ease of suffering that will make our lives and those of our descendants better. But far more common than inevitabilities are the complete surprises— those events we never saw coming because the things we know we don’t know are overwhelmed by all the things we don’t know we don’t know. I think that tiny yet powerful suspicion in each of us—that we have come so far but have so much further to go—is what drives us to get better, to push the boundaries within the small world of what we know and into the vastness of all that is unknown. For in the end, as the great former U.S. President Abraham Lincoln is often quoted as saying, “The best way to predict the future is to create it.”
