We tend to think in straight lines. This is how you got here. This is who you are, the same “You” as in second grade, with a bit more knowledge and a lot more experience. All of the successes and issues you have are a result of the line of “You”, across all these years: financial, social, health. These simple stories, with minimal branches (e.g., the college years!!), allow for comfort, coherence, and understanding.
The reality of our lives is quite different from this, however. From moment to moment, the circuits in our brains recognize and interpret patterns, sending “conjectures” to other circuits and so on. The functionality of brain circuits is vast, but the basic process across all is the interpretation of and conjecturing about information received, and the dissemination of that interpretation (more information) to other areas for similar processing. Everything we do, from the formation of a “complete” thought to saying something in a meeting to swinging at a ball at the right moment, is the culmination of chains of decisions. Environmental responses (getting an idea shot down by a superior, hitting a home run) are then interpreted and factored into future decisions, making thoughts and actions more or less likely to occur again. The point is, at almost every stage, there is a range of possibilities for the path we ultimately take.
Therefore, the largely linear stories we tell ourselves are vastly incomplete: barren trees without leaves, relative to the complexity of our lives. You contain multitudes, as the saying goes. Over time, a linear story can collapse our perceived possibilities; we get better at what we are doing, at the expense of everything else.
How does this relate to Healthcare and Digital Experience Platforms and Apps?
First, for a long time the story of your health over a lifetime was rather simple. Relative to the present day (at least in developed countries), you almost inevitably died young. This gap in life expectancy made it pretty easy to ignore things like Alzheimer’s and prostate cancer. If you lived before modern plumbing, you often succumbed to infection, or childhood illness, or worse. It was only in the 20th century that doctors and medical institutions started focusing on chronic illness and the illnesses of “aging”.
One major reason this story was simple is that, unlike your brain, it had very little input about very few things. It truly was a barren tree, and thus the suggestions fed back to patients were limited (at least in efficacy). Even if one stepped back, the decisions and possibilities one could imagine remained constricted in scope.
Jumping forward to a few decades ago, Electronic Health Records (EHRs) began to collect, store, and in some cases process richer data from a broader range of inputs: physician notes, imaging data, billing information, patient outcomes, and so on. The development of EHRs was driven by this proliferation of new inputs (i.e., information), and the story one could tell about health trajectories became richer. A large part of this richness lay in the healthcare options available, which increased the number of branches via the decisions available to the care provider, consumer, and payer. Over time, the recognition of this richness shows up in the number of possibilities one sees as available. This is one clear way in which information can flow from one system (the EHR) to another (the consumer’s cognitive-affective states) and cause visible impacts on health behavior - which, in turn, provides new inputs into the EHR.
At the same time that this proliferation of health information in EHR systems yields new possibilities for individuals and groups, it also reveals MANY gaps in our understanding. There are many reasons for this, but I’ll focus on just one, using our brain example. I mentioned that there are (many, many) circuits in the brain that take in, process (interpret), and propagate that interpreted information to other areas. However, not every circuit can process every kind of information, so each is quite limited in what it can “see” and “send”.
An EHR system (or any system that isn’t universal and ideal) has a limited sightline into any individual’s health behavior, as well as that individual’s environment. In fact, “see” and “send” can become completely detached: much of the information stored and sent by EHRs is unstructured, and of very low utility. The systems that can “see” this unstructured information tend to be very specific in what they can interpret (e.g., fMRI data). So, traditional EHRs have stuck with 1) increasing the amount of information they can “see”, which limits the visibility of options, or 2) trying to lever in data from other sources while maintaining the interpretability of legacy data, which is computationally and conceptually expensive. Both options will continue to be pursued, but both have limitations baked into the approach.
What’s a new approach? It’s already underway, in the proliferation of platforms/apps, platform/app aggregators, and connector code between computing languages. Similar to our model brain, the underlying catalyst is the notion that instead of pursuing a universal system that can take all information in and interpret it in nuanced ways, we pursue specific “processing” units that collect and interpret sharp, specific data within a constrained environment. The task, then, is to continually learn to send that sharp, interpreted information to other systems in ways that they can understand and interpret. This likely necessitates a loss of some information from system to system, but it allows for much greater visibility into any individual process and, taken as a whole, can provide a much more holistic view of the person and the environment.
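To make the idea concrete, here is a toy sketch of that architecture. Everything in it - the names, the thresholds, the two example processors - is my own illustration, not any real system’s API: each processor “sees” only one kind of raw input and passes only a compact, lossy interpretation downstream.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    """The compact, lossy summary a processor passes downstream."""
    source: str
    label: str
    confidence: float

def heart_rate_processor(bpm_samples):
    """Interprets exactly one signal type: beats-per-minute samples."""
    avg = sum(bpm_samples) / len(bpm_samples)
    label = "elevated" if avg > 100 else "normal"
    return Interpretation("heart_rate", label, 0.8)

def mood_processor(utterance):
    """A stand-in for conversational interpretation (purely hypothetical)."""
    label = "negative" if "tired" in utterance.lower() else "neutral"
    return Interpretation("conversation", label, 0.6)

def aggregate(interpretations):
    """A downstream system sees only the summaries, never the raw data."""
    return {i.source: (i.label, i.confidence) for i in interpretations}

view = aggregate([
    heart_rate_processor([110, 104, 98]),
    mood_processor("I feel tired today"),
])
print(view)
```

The raw samples and utterances never leave their processors; only the interpreted labels travel between systems, which is exactly the information loss (and the gain in cross-system legibility) described above.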
For example, let’s take someone who has recently been diagnosed with Type 2 Diabetes and has access to healthcare and additional resources. From EHR data, they can learn about the outcomes of specific tests, medication information, some guidance from their physicians and nurses, and some information about similar individuals’ outcomes (I’m simplifying the amount and type of information, but the argument stands). This is a lot better than the 19th century, when you had your memory of your behavior and feelings, and perhaps some guidance from a physician. But it still misses much of the information about your life: what you actually do from day to day, what you actually think from day to day, how your environment changes from day to day, and so on. Because of this lack of information about an individual’s life, the representation of that individual in any one, or two, or five systems is incomplete. If we look into the future, however, and look to filling in the picture, we can see that the development and interoperability of multiple systems that each process sharp, specific types of information is key. For our diabetic patient, we can learn about basic motor activity via actometer information from fitness bands. We can start to learn about risky areas (perhaps filled with favorite fast food spots) from GPS data in phones. We can increasingly detect biologic signals through non- or minimally-invasive means, through devices that detect and interpret particular signals in a sharp, specific way.
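For the diabetic patient, combining even two sharp streams already fills in part of the picture. A minimal sketch, where the field names and thresholds are illustrative assumptions of mine (not clinical guidance or any vendor’s data model):

```python
from dataclasses import dataclass

@dataclass
class DaySummary:
    steps: int             # from a fitness band (actometer)
    fast_food_visits: int  # inferred from phone GPS (hypothetical)

def daily_flags(day: DaySummary) -> list:
    """Combine two sharp, specific streams into simple behavioral flags.

    Thresholds are illustrative only, chosen for the example."""
    flags = []
    if day.steps < 4000:
        flags.append("low_activity")
    if day.fast_food_visits >= 2:
        flags.append("diet_risk_area")
    return flags

print(daily_flags(DaySummary(steps=3200, fast_food_visits=2)))
```

Neither stream alone says much; together they begin to describe what the person actually did that day, which no EHR field currently captures.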
At Affective.Health, we focus on collecting cognitive-affective information from individuals through conversations. The interpretation of information contained in conversation is still the primary way in which mental disorders, addictions, and other diseases are diagnosed, and Affective.Health, along with a number of other companies, has existing products (or products in the pipeline) that provide automatic interpretations of well-defined conditions in this space.
One area of focus at Affective.Health is the collection and interpretation of individuals’ feelings, beliefs, and motivations. Cognitive-affective concepts such as intent, belief, and motivation underpin our behaviors in a wide range of areas. These concepts are also amenable to state-like interpretations; that is, they fluctuate over time and in the presence of different contexts. The Affective.Health DXP is designed to understand and interpret individual cognitive-affective states, through a conversation-like process, over time, within context. The focus on this type of information allows us to sharpen models and predictions, and limits the information loss when interacting with other systems.
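What “state-like” means can be shown with a toy numeric sketch - my own illustration, not the actual DXP: an exponentially weighted estimate that drifts as new conversational observations arrive, rather than assigning a person one fixed score.

```python
def update_state(prior: float, observation: float, weight: float = 0.3) -> float:
    """Blend a new observation into the running estimate, so the
    estimated state fluctuates over time instead of staying fixed."""
    return (1 - weight) * prior + weight * observation

# Hypothetical sentiment-style scores in [-1, 1] from successive conversations.
state = 0.0
for obs in [0.4, -0.6, -0.8]:
    state = update_state(state, obs)
print(round(state, 4))  # the running estimate has drifted negative
```

The weight controls how fast the estimate responds to context; a richer model would also condition on the context itself, but the core idea - a state that is updated, not a trait that is assigned - is the same.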
Over the next few years, Affective.Health, and many other companies in the healthcare space, will continue to hone the collection and interpretation of specific types of information, as well as work to increase interoperability with other platforms/applications. The ultimate goal is not the creation of a brain that processes all information, but the clear, holistic development of individual and group health possibilities. Some areas will be more important for some aspects of health, and the flexibility of an adaptive, aggregate system will allow for better and better focus when this is the case.
The ultimate goal - if one is benevolent - is a highly efficient set of systems that collect, interpret, and represent ALL the possibilities available to an individual moment-by-moment, providing complete insights into the decisions that affect their health. Until then, we work to sharpen our instruments and discover the possibilities that remain undiscovered.