Dr. Gidi Stein is co-founder and CEO of MedAware, which uses machine-learning algorithms — artificial intelligence — to reduce medication-related risks and promote patient safety. A practicing physician, researcher, and long-time entrepreneur, Dr. Stein graduated from Tel Aviv University Medical School and specialized in internal medicine. He also earned his PhD in Computational Biology from Tel Aviv University, dedicating his research to optimizing the treatment of breast cancer.
Health information plays a vital role in guiding care priorities, diagnoses, reimbursement, research, and drug development. However, information anxiety (an overabundance of information) is causing healthcare professionals to tune out the constant background buzz. Information overload can contribute to provider burnout, turning a vital resource into a nuisance.
In this Medika Life exclusive, we speak with Dr. Stein about his efforts to harness the power of AI and clinical data to identify patient-specific medication risks throughout the patient journey.
Physician Desire to Heal Can Be Augmented by Tech
Gil Bashe: As a physician, scientist, and innovator, you’ve dedicated your career to using your knowledge of healthcare, technology, and data to help solve the challenges healthcare professionals are facing today. As more medications come to market, one of those challenges concerns the safety of medication prescribing and monitoring, specifically for drug-drug interactions. Can you tell us a little about the current landscape of medication prescribing and how physicians and patients are impacted?
Dr. Gidi Stein: The impact is of a totally different magnitude than when I started to practice medicine. There was an adage then that every physician had to know 50 medications really well. There were only a few hundred medications available, and patients received perhaps two to three medications regularly, so the probability of drug-drug interaction was relatively low. But today, when hundreds of new drugs enter the market annually, there is no way that any physician can know even 10–15% of them, and definitely not all the possible interactions and contraindications.
Bashe: Can you tell us more about how current systems used for drug-drug interactions are designed and managed?
Stein: Currently, the state-of-the-art systems used in most medical institutions are clinical databases that are manually curated and updated by clinicians and pharmacists tapping the latest clinical evidence and guidelines. These systems contain hundreds of thousands of potential drug-drug interactions and other rules such as dosages. Most databases generate alerts in the electronic medical record (EMR) when medications that shouldn’t be taken together are prescribed to a patient. In concept, this approach should be sufficient to eliminate most drug-drug interaction errors. In reality, it doesn’t happen.
Bashe: With all these databases and knowing the essential mandate to protect patient health, how do medication errors still occur? How can artificial intelligence improve the provider and patient experience?
Stein: Many good people have put in the time and effort to create state-of-the-art clinical knowledge bases, but these data do not necessarily translate into real-world clinical value. In reality, these systems generate a very high alert volume. Most of these alerts are seen by clinicians as false alarms, which drives “alert fatigue,” and often results in clinicians ignoring the alerts altogether. Ultimately, this puts patient safety at risk and actually reduces clinicians’ confidence in the alerts that were designed to help them.
Unfortunately, there is a discrepancy between the knowledge stored in clinical databases and the limited clinical value these systems provide. One of the main reasons for this is that while knowledge-base rules are accurate, they aren’t necessarily relevant for a specific patient in a specific clinical situation at a specific time. Personalizing rules for each patient by using AI establishes alerts for specific, relevant clinical situations. This can reduce the overall alert burden and increase clinical accuracy — thus improving clinician compliance and, most importantly, patient safety.
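To make the idea concrete, here is a minimal, hypothetical sketch in Python of context-aware alert filtering. The names and fields (`InteractionRule`, `Patient`, `risk_condition`) are illustrative assumptions, not MedAware’s actual design: a textbook interaction rule fires only when the patient’s clinical context makes it relevant, which is what trims the alert volume.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of context-aware alert filtering. A rule from a
# drug-interaction knowledge base is surfaced only when it matches the
# patient's medications AND clinical context, not every textbook hit.

@dataclass
class InteractionRule:
    drug_a: str
    drug_b: str
    severity: str                          # e.g. "low", "moderate", "high"
    risk_condition: Optional[str] = None   # context that makes the rule relevant

@dataclass
class Patient:
    medications: set
    conditions: set = field(default_factory=set)

def relevant_alerts(patient, rules, min_severity="high"):
    """Return only the rules relevant to this patient's situation."""
    order = {"low": 0, "moderate": 1, "high": 2}
    alerts = []
    for r in rules:
        if {r.drug_a, r.drug_b} <= patient.medications:
            context_ok = r.risk_condition is None or r.risk_condition in patient.conditions
            if context_ok and order[r.severity] >= order[min_severity]:
                alerts.append(r)
    return alerts
```

In this toy model, the same warfarin–aspirin rule would alert for a patient with a bleeding history but stay silent for one without, which is the kind of per-patient relevance the interview describes.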
Bashe: Recognizing these gaps and opportunities, you founded MedAware to create a physician- and patient-sensitive, intelligent system that could leverage thousands of data points to provide both a panoramic view of prescribed medications and potential errors and personalize outputs for specific patient needs. How does this technology all come together and how has it been received by the healthcare community?
Stein: We recently showed the results of one of our implementations to leaders of a leading hospital. The chief medical officer stood up and said, “We’ve been slaves to the EMR for years — we’ve been entering data relentlessly and now, for the first time, our data works for us.”
This is the magic of speaking to people’s needs rather than having them adapt to what’s available. We use data hidden within the EMR to understand how physicians practice and how they treat patients in varied clinical scenarios. By combining that intel with the knowledge from the drug interaction databases, we are able to personalize alerts to reflect the clinical needs of that specific person, and ensure the alert is medically relevant for the individual patient.
Bashe: Does this system provide direction to the physician on what steps should be taken after receiving an alert?
Stein: Artificial intelligence (AI) is geared to support physicians. It’s not a replacement. It would be presumptuous for AI or a computerized system to suggest something to a physician and the medical team. What we can do is identify potential adverse drug events very early on and perhaps prevent them. We’re not simply assessing the medication at the moment of prescribing — we’re also monitoring the patient, post-dispensing, looking at any changes in their clinical status or data. If the patient suddenly develops a new side effect, we can capture it early and associate it with the potentially offending medication.
The system notifies the care team that there may be a suspected interaction, and that the clinician and patient should consider taking a lab test to confirm. We’re not telling clinicians what they should do; we’re just providing a safety net of continuous monitoring to ensure that patients remain safe. This is another one of the unique features and main benefits of this system.
Bashe: You are synthesizing vast amounts of data behind the scenes. How does it work? How does it relate to the institution’s own IT network?
Stein: On the technical level, we have our own server that resides either onsite at a health system or in the cloud, and it communicates directly in real-time with the institution’s EMR system. In order for the algorithms to work, we take the data from the EMRs and process it so that the system can compare patient records across different geographies and technologies. We then apply AI on top of that, which is agnostic to the specific location or the technology being used onsite. At that point, we are able to get a clear picture of the patient’s clinical situation, understand the mechanisms by which clinicians make mistakes, and create models that specifically address these situations. We can compare practices and behaviors among institutions, and this creates great value that is immediately translated into benefit to users and their patients.
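As a rough illustration of the normalization step described here, the sketch below maps site-specific EMR drug codes into one canonical vocabulary so downstream models can stay agnostic to the source system. The mapping table, codes, and field names are invented for illustration, not taken from any real EMR.

```python
# Hypothetical sketch: normalizing raw EMR records from different sites
# into one shared schema before applying site-agnostic models.
# The local codes and field names below are invented examples.

CANONICAL_DRUG = {
    "WARF-5MG": "warfarin",   # hospital A's local formulary code
    "RX00123": "warfarin",    # hospital B's code for the same drug
    "ASA-81": "aspirin",
}

def normalize_record(raw):
    """Map one site's raw EMR record into a canonical form so the
    same models can run regardless of source system or geography."""
    return {
        "patient_id": str(raw["id"]),
        "drug": CANONICAL_DRUG.get(raw["drug_code"], raw["drug_code"].lower()),
        "dose_mg": float(raw["dose"]),
    }
```

Once two sites’ records land in the same schema with the same drug vocabulary, the same interaction models can be applied to both, which is what makes the AI layer location-agnostic.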
Bashe: As the adage goes: “Data in, data out.” There is information and then there is the application of these databases. How does this look on the physician side? How will they know there is a possible medication risk or drug-drug interaction for one of their patients?
Stein: It’s split into two scenarios. The first is having an alert generated at the moment of prescribing. That’s a very straightforward, synchronous warning. The second scenario is an asynchronous alert generated after the patient has been receiving the medication. That could happen when a new lab test or vital sign, combined with other clinical information, indicates that one of the patient’s medications poses a danger.
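The two scenarios could be sketched as two small checks, one synchronous at order entry and one asynchronous when new results arrive. The data structures, thresholds, and function names are illustrative assumptions, not the actual system.

```python
# Hypothetical sketch of the two alert paths described above.

def check_at_prescribing(new_drug, current_meds, interactions):
    """Synchronous path: warn immediately if a new prescription
    interacts with anything the patient already takes."""
    return [d for d in current_meds if frozenset((new_drug, d)) in interactions]

def check_on_new_lab(med_effects, lab_name, value, threshold):
    """Asynchronous path: when a new lab result arrives, flag any
    current medication known to affect that lab if the value is high."""
    if value < threshold:
        return []
    return [m for m, labs in med_effects.items() if lab_name in labs]
```

The synchronous check runs once, inline with the ordering workflow; the asynchronous check runs continuously against incoming labs and vitals, which is the “safety net” of post-dispensing monitoring described earlier.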
Different institutions have different workflows — operational cultures — to ensure the clinician receives the alert in a timely manner. The way the alert information is shared aligns with the infrastructure of the hospital. The insights MedAware generates become part of the set workflow for the clinician and care team.
Bashe: It sounds like artificial intelligence can have a big impact on identifying drug-drug interactions or contraindications at any point in the physician’s workflow and patient journey. With impressive data from implementations already in place, what’s the next big development you’re working on?
Stein: The next level is expanding beyond indications, looking at issues that may have slipped through the diagnostic and primary care cracks.
Bashe: This is interesting. So, you’re looking to expand upon current AI-driven medication insights to identify other areas of risk that physicians may not yet be aware of?
Stein: Yes. For many years, physicians tried to be both memorizers and philosophers. That doesn’t work anymore. We have to let the computerized system be the memory and the physicians be the philosophers. At the end of the day, the value of the clinician is measured in that intimate encounter with the patient in the office, during that dialogue, in that physical checkup — really comprehending the problem and translating the patient’s words into a clinical understanding. What we are trying to provide are critical accessories that free clinicians to devote more time to really take care of their patients.
How Tech Supports the Hippocratic Oath
Bashe: Providers too often see the burden of managing health information falling on them. They ultimately shoulder the responsibility for patient wellbeing. EMR input, keeping track of the growing list of condition codes, and navigating the challenges of interoperability become obstacles to their care mission. Machine learning, combined with the operational culture of an institution, the personal-care needs of each patient, and the passion of the healer to do their best work, can bring about a long-needed change. Machine learning and physician expertise combined can do even better in fulfilling the Hippocratic pledge to “Do no harm.”