Jeffrey Herron (’05)

Dr. Jeffrey Herron is affable, energetic, and clearly excited about what he does. He nonetheless starts our interview with a caution about the nature of his work:

While there are a lot of exciting aspects to the work that I do, at the same time, there is the danger of appearing to make promises that the neural engineering field can't yet keep. Every time a researcher makes a promise the field can't keep, or implies that we will have a technological breakthrough this year that might only be possible in five or 10 years, it hurts our long-term funding.

I assure him I will be mindful of the complex nature of his research. We are sitting in his neat and spare office in one of the newer Computer Science buildings at the University of Washington.

What are you working on now?

I have a PhD in Electrical Engineering, and I serve as research faculty in the UW School of Neurosurgery. My job is to work with the neurosurgeons on the technical aspects of translating new findings from pre-clinical trials, or new technologies, into feasibility studies to improve treatments for human patients in the short to medium term. What that entails is staying on top of new technology and writing up investigational device research proposals that in many cases have to be reviewed by both the FDA and the institution before we can proceed with our work. Investigational devices are new, non-commercial medical devices that are either implantable or involve a new stimulation technique, which we test with patients to advance medicine.

What you are working on sounds a bit like a pacemaker for the brain.

That's exactly what the Deep Brain Stimulator originally was. The history of neuromodulation for movement disorders begins with doctors noting that patients suffering from disorders like Parkinson's Disease would have relief from symptoms when they suffered a brain injury. This was many decades ago, and science had only very basic notions of how the brain worked and how the parts of the brain interacted with each other. Having noted the benefit of a brain injury in one patient, they would then lesion the brains of other patients in an effort to reproduce the desired outcome. At some point, they started placing electrodes and providing electrical stimulation to the brain and noticed that they could also make symptoms go away by this method. They were also able to determine from the use of electrodes whether they were in fact lesioning the correct part of the brain.

So, the use of electrodes reduced the guesswork of lesioning the brain, since electrodes could be moved, but lesions could not be undone?

Yes, and this happened in the wake of the successful introduction of the cardiac pacemaker, when neurosurgeons realized that they could use the same device to power an electrode in the brain. The first few neurostimulators were in fact off-the-shelf cardiac pacemakers, and the devices we use today still work in the same way. Unlike pacemakers, which have gotten very sophisticated because we understand the heart so well (they monitor what the heart is doing in real time and respond appropriately, such as speeding up when a patient is climbing the stairs), we still have some way to go to get beyond applying a stimulus at a set frequency and amplitude. By better understanding the complex dynamics of the brain, we can work out when and how stimulation should be delivered and provide better therapy.

As a lay person, I imagine the interactions of different parts of the brain are much more complex than the functioning of the heart.

We have a pretty good understanding of what individual parts of the brain control, but the dynamic, real-time interaction between parts of the brain, and the decisions about where to apply deep brain stimulation, are what is most challenging. An example of this is past work done in partnership with Stanford to identify a closed-loop algorithm for Parkinson's Disease, where we monitor the neural signals present in the sub-organ of the brain we are stimulating and respond in real time to its feedback. It seems to be working pretty well.

So, in the same way that the heart will tell the pacemaker what is going on, and the pacemaker will respond in real time, you are creating a feedback loop for the control of motion by brain stimulation?

Right. But it depends on what we are dealing with. With Parkinson's Disease the patient usually experiences full-body symptoms while at rest, whereas another neurological movement disorder, Essential Tremor, is marked by a tremor in only one limb that is present only when the patient is using that limb. We've had some success with Essential Tremor in a feasibility study here at the UW where we put electrodes on the surface of the brain. When you have sensors on the surface of the brain, you can tell when the patient is using their arm via a brain–computer interface: you see a drop in band power at certain frequencies. It's pretty neat! The use of the patient's arm, for example, causes the stimulation to kick on to control the tremor. The work we are doing on the neuromodulation therapies that now exist for neurological disorders comes down to three questions: 1) Where do we need to stimulate? 2) How do we need to stimulate? and 3) Where do we need to sense? That is where there's ongoing work across the field in Epilepsy, Dystonia, and other conditions.
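The triggering logic Dr. Herron describes can be sketched in code. The following is a hypothetical illustration, not his group's actual implementation: stimulation is enabled when power in a movement-related frequency band drops below a threshold, signaling that the limb is in use. The band edges, sampling rate, and threshold here are all assumed for the example.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power in [f_lo, f_hi] Hz, estimated with a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def stim_enabled(signal, fs, threshold, f_lo=8.0, f_hi=30.0):
    """Enable stimulation when movement-band power drops below the threshold
    (the drop stands in for 'arm in use', as in the interview's description)."""
    return bool(band_power(signal, fs, f_lo, f_hi) < threshold)

# Toy demonstration: a strong 20 Hz oscillation (limb at rest) versus
# low-amplitude broadband noise (band power suppressed during movement).
fs = 250.0
t = np.arange(0, 1, 1 / fs)
rest = np.sin(2 * np.pi * 20 * t)              # high band power -> stim stays off
rng = np.random.default_rng(0)
movement = 0.05 * rng.standard_normal(len(t))  # low band power -> stim kicks on
```

A real closed-loop system would of course add filtering, artifact rejection, and hysteresis so stimulation does not chatter on and off at the threshold.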

What is Dystonia?

Dystonia is often caused by an injury at birth; it leads young people to have uncontrollable repetitive motions or to twist themselves into abnormal postures.

There is so much here that you have laid out very concisely. How did you find your way to this work when Electrical Engineering might have taken you in so many other directions?

As an undergraduate at the University of British Columbia, I had some exposure to the growing field of neuroengineering by attending an IEEE EMBS (Institute of Electrical and Electronics Engineers, Engineering in Medicine and Biology Society) conference in Vancouver, Canada. I had a chance there to learn more about the whole brain–computer interface (BCI) field I had read about, and I thought, ‘That sounds kind of cool.’ Then upon graduating, it was the Great Recession, which was a little rough. I was fortunate in that I had a great internship as an undergraduate at Microsoft working for Tom Blank, who is the father of one of my Overlake classmates (Caitlin Blank ’05), and he actually ended up hiring me on as a vendor consultant for a hardware engineering prototyping group in Microsoft Research. Tom was an absolutely wonderful mentor, and he hired me on knowing that I wanted to go back to grad school and pursue studies in neuroengineering and figuring out interfaces with the brain. He supported my applications to grad school, and I ended up coming to the University of Washington specifically because of the NSF-funded Center for Sensorimotor Neural Engineering (now the Center for Neurotechnology). While in that program I performed research into closed-loop DBS methods and wrote some APIs (software interfaces) for implantable devices used as part of our research, which caught the attention of Medtronic when I was graduating.

Working as an engineer at Medtronic in Minnesota gave Dr. Herron experience with next-generation tools, which led to his current job back at the UW. The cycle between university research and applied engineering in medicine is an important aspect of how the feedback loops of research, clinical trials, and work with the medical community reinforce one another to advance treatments and improve quality of life.

Specifically within the neuroengineering space, the science advances hand in hand with developing new tools that meet medical standards. That's where my experience as an academic working with research tools led to work at Medtronic, and working in industry on medical-grade hardware and software made me an attractive hire back into academia.

Just to be clear, is your primary work with the coding of the devices?

I do a lot of coding, but this is where it gets kind of interesting, because what I do involves things at the systems level of engineering: taking all the pieces and putting them together for a novel application. In the past that has involved everything from printed circuit design, to firmware on devices, to sometimes packaging those devices up and establishing communication protocols between the devices and external computers. I have also designed interfaces on those computers to abstract those communication protocols and developed applications on top that can control the devices and do all kinds of different things. So, a lot of that is software programming, but a lot of it is also analog and digital hardware design.

So, you are more holistic in your approach?

Yeah, and that’s an interesting dichotomy because in a lot of ways the assumption about someone going off and doing a PhD is that they are drilling way down deep into something specialized and arcane. There is some truth to that in my case, but in order for me to do what I do, I have to have a sufficiently broad skillset to work with neuroscientists, IC designers, and medical device manufacturers, and to teach students how to do all this stuff too — to see the whole system and understand its key parts.

Are these devices, once they clear the development stage, something that is controlled wirelessly and doesn't require an external device on the person?

Yes, in fact one of the devices I helped develop for Medtronic was fully implantable, and the patient didn’t have to wear anything. Researchers could communicate in real-time with the device wirelessly within a one-meter radius of the patient. They could adjust the functioning based on data streamed directly from the patient’s brain to fine tune the device. In some other research applications, the patient can wear a backpack if their device always needs to be wirelessly connected to a computer in order to do real time calculation necessary for optimal functioning. The key is to be able to develop tools that allow progress. I like to build technology that other people can leverage, but that is the tricky part. I have made my share of bad design decisions that have made research systems difficult to use for research, though still safe for patients. Each new system is a learning opportunity.

If you think back to your years at Overlake, was there anything about your experience with a particular faculty member or class that informs your work now?

There's a piece that everyone says about Overlake, which is that it prepares you for college quite well, especially on the writing front. I actually do a lot more writing as an academic than you might expect. I write papers. I write grants. I write applications for investigational review boards. I spend more of my time writing than anything else. I wish I was writing more code or designing circuits, to be honest. Overlake was very key in sowing those seeds for what I do now. I also had an especially good experience working on the stagecraft side of things. When I was at Overlake there were these lighting gurus who each had a tenure doing all the lighting for the performing arts. Bill Johns was the one who taught me about it, and while I can't say that stagecraft led directly to my work now, what was lovely about it was that it gave me a creative outlet as a young teenager, and I learned that creativity and technology weren't necessarily mutually exclusive. And being entrusted with the responsibility was empowering. The mentality of stagecraft is that the show must go on. When a screw-up happens, you have to push through it, make on-the-fly decisions, and solve the problem. And you see the immediate effect of those decisions. To this day, some of the mistakes I made inform my work. There are times to shoot from the hip, but more often than not, a more considered approach succeeds.

Another feedback loop? What’s the astronaut’s saying? “If you have ten seconds to act, think for nine and act in one.”

Yes! That sounds about right.

Growing up, was there a lot of science and computers in your family?

Yes, my father still works at Microsoft, primarily as a coder. Basically, there has always been a technical side to my family, and that definitely contributed to my choice of work. I also learned to use a keyboard from a young age, and that accelerated things as well.

Like Stevie Wonder! If you keyboard enough, you don’t have to see what your fingers are doing.

That’s a good point! Going back to faculty at Overlake who were particularly influential, I have to name Bill Johns, Sarah Gallagher for her rigor on the writing side, and on the science side, Lisa Orenstein. My homeroom advisor Christine Miller was also great, and I have lots of fond memories of learning Japanese from Motoko Hayashi.

Looking ahead, what is your hope for your work? What would you like to accomplish that is not yet done?

I would like to see Parkinson’s eliminated for a start. One of the depressing things about working on the devices side is that it is never going to be a cure. It modulates symptoms. I try to keep that in mind. Ideally, we would not have a need for the devices I develop. Parkinson’s is caused by the brain not being able to regulate its dopamine system, which is why the first step in treating it is prescribing L-dopa, which the body can make into dopamine, but that’s also not a cure. Our current understanding is limited, and we don’t really know the cause. As long as we don’t know the cause, we won’t be able to develop a cure. That’s why NSF and other organizations are pouring funds into this research. In the meantime, my work helps patients better manage their symptoms. Long term, my hope is that through our work we can understand the brain better such that we can improve the treatment and the lives of people suffering from neurological disease or injury. The UW is broadly focused on this area, including helping people recover from spinal cord injury, brain injury, and a new stroke project.

The more we dig into these things, the more we learn.

Indeed, and I find the non-technical side of the neural technology space interesting as well. I'm also collaborating with philosophy students and faculty here to understand the ethics involved. There are interesting questions around conditions like autism and Asperger's. Are you enforcing a norm on people when they are just thinking differently? What are the ethics of changing the way someone thinks just because they think differently? The UW is really strong on the ethics component, which makes me happy. These themes are things we have been writing about as a culture, and now our technology makes these questions more urgent.