Influencing doctor behavior for better patient outcomes (shared decision making and patient engagement)

Here is a semi-transcript of a talk I gave this morning at the OSEHRA Summit in Bethesda, Maryland. I was very nervous, but I got several hugs afterward, so I think it went ok. This is the first significant thing I've written about my new job since I talked about using password managers, and this one is much more personal. None of the ideas are mine; I've just glommed on to them over the last 18 months while working at PDA.

Good morning.

I want to reassure you that I am not going to be talking about some stupid operant conditioning tricks with points and badges like all those pseudo-science, VC-funded phone apps use. My perspective today is as a long-time open source user, developer, and community member, with a very personal motivation for coming to work in healthcare.


I spent a large portion of my career working on some well-known open source projects such as MySQL and Ubuntu. Then a couple of years ago I was trying to cope with the suicide of my little brother, which weighed even heavier on me as I considered that my cousin and grandfather had also committed suicide, and recalled the failed attempts of several other close relatives. My impression of behavioral healthcare was of a deeply corrupt system filled with neglect, abuse, rape, and death. There was a comment toward the end of the open source business models session yesterday pointing out that most of the highly successful open source projects are universally applicable – databases are used in many different verticals. As I contemplated what I was doing with my life, I decided that working on open source tools in general was not sufficient. There is in fact great honor in building general-purpose tools for human society, but for me it was too abstract. In order to feel at peace I needed to work on something more concrete, more connected to the end user. And so I decided to go into healthcare: although it was too late to help my brother, perhaps I would feel better if I was able to help someone else.


After some time searching, I came across an amazing little company, a company where some patients had built their own tools: Pat Deegan PhD & Associates, LLC. Dr. Deegan is an amazing person who was diagnosed with schizophrenia at a young age but rejected the clinical advice to sit back and live her diagnosis. She recovered, earned a PhD in psychology, and accidentally started a software company when she realized that building tools was the most effective way to help practice (what really happens in the field) catch up with science (what the evidence tells us are best practices for treatment). I submitted a proposal to speak here because I think what I've learned while working on CommonGround with Pat is really interesting for anyone who is building medical software – perhaps most significantly because it is software that was designed and built by patients themselves! I hope you are able to apply some of these ideas in whatever part of this grand old ecosystem you are working in.


As technologists, as clinicians, we often find ourselves in the role of expert, of authority, of the one held accountable, and we slip into a mode of knowing the answers. After all, the client/patient/consumer has come asking for our expert knowledge to be applied to a complicated situation. Combined with a heavy workload and a high volume of information to be processed, it's so easy to slip into a parental attitude. This can overwhelm the interaction. After all, it's well and good for the patient or the customer to have concerns, but they aren't the ones being held accountable for maintaining KPIs, treatment outcomes, and costs in a fee-for-service setting where the diagnosis-related group (DRG) codes change the bottom line. Kind of funny: I've read that DRGs themselves were originally intended to influence physician behavior. Even with the best of intentions, a mandate or clinical philosophy will eventually fail under the stress of real-world practice without systems that support and reinforce those intentions.


This situation is exacerbated in behavioral health settings, where the common perception is not only that the patient is less educated and informed than the doctor, but also, by unspoken assumption, that their decision-making ability is impaired. After all, here is someone currently experiencing psychiatric symptoms. It turns out that this is not true. Careful research in a large national study, the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), showed that decisional incapacity is rare even in people diagnosed with psychotic disorders (Lieberman, 2005).

Then we have career goals – as technologists and clinicians we are understandably fascinated, even obsessed, with "making a difference". This means trying to move the average. You might start looking at data, at statistics, trying to figure out how best to direct your efforts to help the most people. You might dream of a big breakthrough like washing hands, using electric razors, a new drug, a new therapy approach, a novel application of the wealth of technology we have in our toolbox. We build decision support systems for practitioners that warn of drug interactions, and we strive to incorporate evidence-based practices in every facet of our work. Who could argue with evidence-based medicine? Large samples, algorithms, science!


And evidence-based practice, while wonderful, has its limitations. The sad fact is that there are still many cases where there is no obvious treatment choice, where there are multiple treatment options with no evidence to recommend one over the other. Yet these treatment options have very different side effect profiles, with profound effects on a patient's day-to-day life (separate from the symptoms they are presenting with). Do you want the medication that causes a permanent neurological movement disorder, the one that causes diabetes, or the one that leaves you too sedated to maintain crucial personal relationships? Is it really recovery if you are so sedated that you stop hearing voices but you also lose your job and custody of your children? In these situations, there is a moral imperative to involve the patient in the decision. The algorithms and meds might dampen my symptoms, but they won't make me loveable, save my marriage, help me keep my kids, or help me keep my job.


It's so easy to focus on big data and forget about the single life that is in our hands, and there are very real cognitive loads that cause us to drift in the wrong direction. We have to keep the patient at the center of the care team, but just trying harder isn't enough. Patients were even mentioned as part of the ONC 5+1 guiding principles: the patients are why the other 5 principles matter. Simply looking at re-admissions, at FTEs, at use of services won't tell you about true treatment success. Recovery means being able to live your life, rather than your diagnosis.


Let's get specific and explore shared decision making in more detail: it's not just me saying this, and it's not just behavioral health where SDM matters. The idea of shared decision making, of taking a patient-centered approach, is in fact a crucial mandate and clinical philosophy: the Patient Protection and Affordable Care Act (PPACA), Section 3022: Accountable Care Organizations. ACOs must communicate clinical knowledge in a way that is understandable and engage in SDM that takes into account patient preferences and values. http://en.wikipedia.org/wiki/Patient_Protection_and_Affordable_Care_Act

Section 3506 further reinforces the requirement for programs to support Shared Decision Making.


Trying to do SDM is the very definition of a complex system (using the Cynefin definition that Dr. Shannon from NHS Leeds mentioned yesterday) – a system where there are multiple confounding factors, where probe-sense-respond is needed. How do we build a system that achieves shared decision making with a patient experiencing elevated stress levels, while staying within a 15-minute consult? I don't know about you, but when I'm at the doctor I am nervous, and I always remember something I needed to discuss when it's too late. And how could the doctor possibly remember to bring up that personal question when they are so rushed from one patient to the next?


Just like 'safety', patient engagement is an emergent quality that comes from the interactions of the system as a whole, not a singular ingredient or a property of one of the components. Being patient-centered is not a checkbox or something that you can assign someone to code. Adding a client portal where you can download a PDF and see an appointment calendar does not magically result in patient engagement. Also remember that just because we build a fantastic tool does not mean that people will use it in the real world after the implementation team leaves. Systems are not just the software. Systems include the people using the software too. And the users of the software are not just the prescribers and the payers – the patients are clients of the software as well! It's not enough to create the tool – the software is the easy part. We also need to help people learn to use the tools at the local level, and understand the context and daily workflows that the tools are used within. Dr. Reider mentioned yesterday: "if you are a developer and you are not watching people use your product, you are not doing your job". That is so congruent with my experience.


This is so important because tools are not neutral. We must remember that in healthcare we serve a vulnerable population. We are interacting with people who are not "the best they have ever been". When dealing with those situations where a medical decision has ethically become a personal choice, the tools, the system, will either obscure or preserve the patient's voice for the care team. Does the software obscure the patient's preferences by submerging patient identity in a sea of facts, or does it reflect, focus, and amplify the patient's voice by making sure the patient is presented to the care team as a person? Are we facing problems with treatment non-compliance, or with preference misdiagnosis? Are we using big data that has no face, or are we using little data: the patient's voice as a guiding signal among the noise of treatment options?


Even with my long background in open source, I found a surprising hero, one who has shown up again and again in open source settings and has also appeared in the recovery movement: the expert consumer. CommonGround is a system built by expert consumers that implements shared decision making in a very sophisticated way.


CommonGround revolves around 4 main concepts. I'll briefly describe them, and then go into much more detail about 5 generally applicable principles for software design.


Personal medicine is how I help myself. It's not self-medicating. It's the things I do in addition to the pills I take. It's short, just a few sentences. For example: when I'm feeling unwell, the things I do that help me feel better are taking a walk, reading a book, and volunteering at the library.


A power statement is how I advocate for myself so that my treatment supports my recovery. It introduces me to my doctor as a person, not a patient. It says how I want psychiatric medicine to help me. And it invites the doctor or nurse to collaborate with me on finding the best treatment to support my recovery goals.

Here is an example of a Power Statement:

I love my girlfriend. I can't be with her if I am paranoid. I want to work with you to find a medicine that will help me be less paranoid so I can be with my girlfriend. But it can't have sexual side effects.

Decision support is not just for the clinician, but for the patient, and sometimes friends and family! Decision support involves learning about my diagnosis, my options, and what has worked for others in my situation.


Shared decision making is where you figure out together the best choices for your recovery.


In the process of implementing these, we've discovered some software design principles that make a big difference. I'll just run through 5 of them here.


The first principle is the use of narrative fragments. Both the personal medicine and the power statement are highlighted at the top of the patient record in CommonGround. It turns out that a cold list of medical facts about someone, like the meds they are on and how their symptoms are charting, doesn't do a whole lot to help you view that patient as a whole person with a life outside the diagnosis. As humans, we relate to each other not by sharing facts, but by sharing story fragments. Importantly, the personal medicine and power statement are narrative fragments written in the patient's own words. This makes them even more powerful as an aid to recalling the patient as a person. Being written in their own words is also essential for the patient – how much easier it is to engage with the software when you recognize what it says. When I introduced myself at the beginning of this talk and told you the fragment of the story about my family and why I decided to work in healthcare, you saw me differently – you couldn't help it! You suddenly paid a bit more attention than if I had just recited some cold facts about my professional history. Think about the difference in engagement and activation when I read my own words saying that I feel better when I walk barefoot on the beach, compared to reading pre-canned words saying to do more cardiovascular exercise! They are the same thing, but one results in patient engagement and the other in patient tune-out. One results in viewing the patient as another number, and one results in shifting perspective, in preserving and amplifying the patient's voice.


Having my power statement, my own words advocating for how I want help with my recovery, aligns the entire care team around my recovery goal. When you are working on a health record, ask yourself: does this introduce the patient as a person, or as a cold collection of facts? On a screen about the patient, how much of the screen is about the patient? It's essential to protect and amplify the patient's voice through the entire care trajectory.
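
To make the idea concrete, here is a minimal sketch of what putting narrative fragments first might look like in code. This is my own illustration, not CommonGround's actual data model; all of the type and field names are assumptions.

```typescript
// Hypothetical sketch: narrative fragments as first-class fields on a
// patient summary, rendered before the clinical facts. These names are
// illustrative assumptions, not CommonGround's actual schema.
interface NarrativeFragment {
  text: string;          // verbatim, in the patient's own words
  authoredBy: "patient"; // never paraphrased by staff
  updatedAt: Date;
}

interface PatientSummary {
  powerStatement: NarrativeFragment;     // the person, before the facts
  personalMedicine: NarrativeFragment[]; // what I do to help myself
  medications: string[];                 // clinical facts come second
  symptomScores: Record<string, number>;
}

// Render the patient's voice at the top of every screen about them.
function renderSummary(p: PatientSummary): string {
  return [
    `"${p.powerStatement.text}"`,
    ...p.personalMedicine.map(f => `- ${f.text}`),
    "---",
    `Medications: ${p.medications.join(", ")}`,
  ].join("\n");
}
```

The design choice the sketch is trying to show: the patient's own words are structured data, not a free-text note buried below the chart, so every view leads with the person.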


The next principle is accessibility. And I don't mean just Section 508 accessibility; I mean able to be used by a vulnerable population with a diverse range of technical literacy. When we are creating tools, we are operating from a position of great privilege. This is where it becomes so important to watch people using your software. Make your software driven by touch. CommonGround started off on touchscreen kiosks, but today that really means designing for mobile first. Make the entire interface drivable by touch, and provide the option at key points of "saying more", of entering narrative fragments. You know some software everyone knows how to use? The elevator. Some other software everyone knows how to use? The microwave. I'm sure you've all seen the videos of babies figuring out how to use iPads – touch is a powerfully simple and intuitive way to interact with a software system. Don't think you are getting off easy: microwave-level clarity in the UI does not mean dumb, primitive interactions. It means expressing sophisticated interactions in such a way that the technology disappears, melts away from consciousness.
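
As a rough illustration of "drivable by touch, with the option of saying more", here is a sketch of a single question screen. The types and names are hypothetical, not a real CommonGround API.

```typescript
// Hypothetical sketch of a touch-first question screen: one plain-language
// prompt, large tappable choices, and an optional "say more" narrative field.
// All names here are my own illustration, not a real CommonGround API.
interface TouchQuestion {
  prompt: string;                            // one question per screen
  choices: { id: string; label: string }[];  // big targets, no typing required
  allowSayMore: boolean;                     // optional narrative fragment
}

type Answer = { choiceId: string; sayMore?: string };

function answerQuestion(q: TouchQuestion, tappedId: string, narrative?: string): Answer {
  const choice = q.choices.find(c => c.id === tappedId);
  if (!choice) throw new Error(`unknown choice: ${tappedId}`);
  // A tap alone is always a complete answer; typing is strictly optional.
  return { choiceId: choice.id, sayMore: q.allowSayMore ? narrative : undefined };
}
```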


A third principle is related to decision support. One aspect of decision support is describing what has worked for others. In order for people to engage, to feel motivated to try something that has worked for others, there are two key things you must accomplish. One is that you must not condescend. People can pick up very quickly on being presented a sanitized, simplified version of something. Use an authentic voice. Find and use the voice of lived experience – it's very powerful. We have found that short video clips work very well for this. The other thing is that people need to recognize themselves in the person telling the story. Imagine you are a veteran, looking for some examples of what has worked for other people with PTSD. Are you going to respond more to hearing about how meditation, yoga, and exercise have helped some rich person who never enlisted, let alone saw combat? Or are you going to identify and respond to a story from another veteran relating how relaxation techniques helped with PTSD? As you can see, there is a huge range in quality of implementation; some mindfulness about the interactions of the real person with the software system results in huge differences in how engaged the patient is. We are currently exploring ways to let people share their own stories. You might call it crowdsourcing: constantly trying to expand the number of different people and life stories that are represented.
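
One way to implement "recognize yourself in the storyteller" might look like the following sketch, which ranks peer stories by shared life context rather than diagnosis alone. The field names and matching rule are assumptions for illustration, not how CommonGround actually does it.

```typescript
// Hypothetical sketch: rank peer stories so the viewer can recognize
// themselves in the storyteller, matching on shared life context rather
// than diagnosis alone. Field names are illustrative assumptions.
interface PeerStory {
  videoUrl: string;
  diagnosis: string;
  lifeContext: Set<string>; // e.g. "veteran", "parent", "night-shift worker"
}

interface Viewer {
  diagnosis: string;
  lifeContext: Set<string>;
}

function rankStories(stories: PeerStory[], viewer: Viewer): PeerStory[] {
  const overlap = (s: PeerStory) =>
    [...s.lifeContext].filter(tag => viewer.lifeContext.has(tag)).length;
  return stories
    .filter(s => s.diagnosis === viewer.diagnosis)
    // Most shared life context first: a veteran hears another veteran.
    .sort((a, b) => overlap(b) - overlap(a));
}
```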


Another principle is personal data mining. This is easy to get wrong through well-meaning attempts at patient privacy – don't let the obligation to protect privacy result in hindering the patient's right to disclose. For every graph or report I see, I should be able to get the raw data that draws that graph and remix it – not just the picture of the graph, but the data behind the picture. If the hospital CEO can data mine, can I do personal data mining and see how my shared decisions have been affecting my recovery, and how that compares to the big data averages? Or is that information reserved for the people making money off of my illness? This is little data – my recovery goals and progress need to be available as a guiding signal among the noise of multiple treatment options. So often what appears to be treatment non-compliance turns out to be preference misdiagnosis. The patient must be able to discover and react to the data just as the care team does.
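Here is a minimal sketch of what "the data behind the picture" could mean in practice: every chart payload carries its raw series, and the same export serves the patient and the clinician. The payload shape is an assumption for illustration, not a real API.

```typescript
// Hypothetical sketch: every chart carries the raw series that drew it,
// so the patient can download and remix the data, not just the picture.
// The payload shape is an assumption for illustration, not a real API.
interface ChartPayload {
  title: string;
  imageUrl: string;                           // the rendered picture...
  series: { date: string; value: number }[];  // ...and the data behind it
}

// The same export serves the clinician dashboard and the patient's own
// "download my data" button; there is no clinician-only copy of the numbers.
function toCsv(chart: ChartPayload): string {
  const rows = chart.series.map(p => `${p.date},${p.value}`);
  return ["date,value", ...rows].join("\n");
}
```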

Finally, we have the principle of being bi-directional. In order to bootstrap trust and patient engagement, you need to make your software bi-directional, transparent. Transparency is another one of those values from the ONC office. We alluded to trust earlier, talking about how good people are at identifying and tuning out a dumbed-down, sterilized message. Being bi-directional means cultivating the ability to speak to both the patient and the doctor at once. You might ask yourself: in my software system, how many screens are designed for the billing folks? How many for the patient to see? How many are designed for the doctor to see? How many of them can they look at together? Can the patient see who has accessed their information? Do both the patient and the doctor get to look at the prompts from the decision support system? Once again, this is not a checkbox but a quality that we work towards.
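
As one small example of bi-directionality, consider the question "can the patient see who has accessed their information?" A sketch of an access log that answers that question identically for the patient and the doctor follows; the names are illustrative, not drawn from any real system.

```typescript
// Hypothetical sketch: an access log that answers "who has looked at my
// information?" identically for the patient and for the compliance office.
// Names are illustrative assumptions, not drawn from any real system.
interface AccessEvent {
  who: string;                                          // person who opened the screen
  role: "patient" | "prescriber" | "nurse" | "billing";
  screen: string;                                       // which view was opened
  at: Date;
}

// Identical output whether the patient or the doctor asks: transparency
// is a shared quality of the record, not a privileged report.
function whoSawMyRecord(log: AccessEvent[]): string[] {
  return log.map(e => `${e.at.toISOString()} ${e.who} (${e.role}) viewed ${e.screen}`);
}
```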

Thanks for listening.  If any of this has provoked some thought, I’d love to meet you, learn your story, and hear your experiences and ideas about these topics.