<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Chatbots - Medika Life</title>
	<atom:link href="https://medika.life/tag/chatbots/feed/" rel="self" type="application/rss+xml" />
	<link>https://medika.life/tag/chatbots/</link>
	<description>Make Informed decisions about your Health</description>
	<lastBuildDate>Sun, 28 Apr 2024 16:42:21 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.5</generator>

<image>
	<url>https://i0.wp.com/medika.life/wp-content/uploads/2021/01/medika.png?fit=32%2C32&#038;ssl=1</url>
	<title>Chatbots - Medika Life</title>
	<link>https://medika.life/tag/chatbots/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">180099625</site>	<item>
		<title>Chatbots Can’t Be Trusted, and We Need Tools to Find Fact From Fiction in Them</title>
		<link>https://medika.life/chatbots-cant-be-trusted-and-we-need-tools-to-find-fact-from-fiction-in-them/</link>
		
		<dc:creator><![CDATA[Pat Farrell PhD]]></dc:creator>
		<pubDate>Sun, 28 Apr 2024 16:42:18 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[Chatbots]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[mental health]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[Patricia Farrell]]></category>
		<category><![CDATA[Search]]></category>
		<guid isPermaLink="false">https://medika.life/?p=19648</guid>

					<description><![CDATA[<p>AI has impacted the lives of everyone around the globe, but we can’t trust its chatbots because they make things up and spread disinformation.</p>
<p>The post <a href="https://medika.life/chatbots-cant-be-trusted-and-we-need-tools-to-find-fact-from-fiction-in-them/">Chatbots Can’t Be Trusted, and We Need Tools to Find Fact From Fiction in Them</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p id="bf9b">Chatbots use AI, automatic rules, natural language processing (NLP), and machine learning (ML) to process data and answer questions. Bots that talk to people come in two main types:&nbsp;<strong>task-oriented</strong>&nbsp;(declarative) chatbots are programs that only do one thing, and&nbsp;<strong>virtual or digital helpers</strong>&nbsp;are another name for data-driven and predictive conversational chatbots. They are&nbsp;<em>much smarter, more interactive, and more personalized</em>&nbsp;than task-oriented chatbots.</p>



<p id="ee10"><a href="https://www.tidio.com/blog/chatbot-statistics/" rel="noreferrer noopener" target="_blank">About 1.5 billion people use chatbots</a>, and most live in the United States, India, Germany, the United Kingdom, and Brazil. More and more people worldwide will be using chatbots in the future. By 2027, a quarter of businesses will probably use them as their&nbsp;<em>primary way to contact customers</em>. A gain of about&nbsp;<strong><em>$200 million a year</em></strong>&nbsp;is shown by this huge growth. We expect this number to hit&nbsp;<strong>$3 billion by the end of this decade</strong>, given its current compound annual growth rate (CAGR) of about 22%.</p>



<p id="4cab">While chatbots are gaining importance in business and potentially healthcare, there are inherent problems that must be addressed. Ignoring issues in the chatbot can lead to&nbsp;<em>biased or distorted information</em>&nbsp;in chatbot algorithms.&nbsp;<strong>Training a chatbot is a challenging task</strong>&nbsp;that requires careful consideration and verification of the results. One of the shortcomings that must be overcome when creating a chatbot is the failure to carefully parse out anything that&nbsp;<em>could indicate bias&nbsp;</em>or failure on the part of the&nbsp;<em>programmers to understand their own biases or shortcomings</em>.</p>



<p id="0d98">I have found with research print on chatbots that they have returned information with alleged articles and URLs that were nonexistent. If I had used them, my article would have contained many mistakes. <strong>Verifying any use of AI in medical and healthcare information searches is crucial</strong>.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="What is ChatGPT and How You Can Use It" width="696" height="392" src="https://www.youtube.com/embed/40Kp_fa8vIw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p id="ae48">The&nbsp;<em>phone tree was the first chatbot</em>. Customers found it frustrating to navigate through the computer menu to reach the automated customer service model. This model changed into&nbsp;<em>pop-up, live chats on the screen</em>&nbsp;as technology improved and AI, ML, and NLP became smarter. The process of change has kept going.</p>



<p id="25a2">Although primarily aimed at business, chatbots, such as&nbsp;<a href="https://chat.openai.com/g/g-PFPunh5Xx-chad-gpt3" rel="noreferrer noopener" target="_blank">ChatGPT3</a>, can be used for a variety of purposes, including academic research, personal interest, creative efforts, writing, or marketing. A chatbot can help with computer code, from improving it to writing code in different languages.</p>



<p id="326f">ChatGPT3 will allow you to prompt it to rewrite what it has provided to you and &#8220;apologize&#8221; if&nbsp;<em>it has not met your expectations</em>. Then, it will go on to provide another version of what you were seeking when you are more detailed in your prompt. The more detailed your prompt, the more likely it is that you will receive satisfactory information.</p>



<p id="bfd5">This can go on for many versions of your prompt until you are satisfied. It does not tire of attempting to satisfy your request. You can also&nbsp;<em>request the number of words you wish</em>&nbsp;the answer to your prompt to be in when you receive it.</p>



<p id="aa23">Chatbots can also assist with identifying errors and generating various types of content. For example, they can summarize a play, book, or story, write a press release, write a lesson plan on a specific topic, develop a marketing plan, outline a research project or paper, and perform many other required productions.</p>



<p id="8659"><strong>One of the problems with research papers specifically,&nbsp;</strong>especially when the individual wants complete URLs for any research cited, is that&nbsp;<strong>the material does not exist</strong>&nbsp;at that computer address and, in fact,&nbsp;<strong>may not exist at all.</strong>&nbsp;The chatbot aims to follow the requested prompt, and that&#8217;s&nbsp;<em>one of their faults</em>.&nbsp;<strong>Chatbots&nbsp;</strong><a href="https://dkb.blog/p/bing-ai-cant-be-trusted" rel="noreferrer noopener" target="_blank"><strong>excel at&nbsp;<em>creating fake</em></strong></a><strong><em>&nbsp;titles or information&nbsp;</em></strong><em>for non-existent research articles and advertisements</em>, and without fact-checking, they can deceive instead of providing accurate information.</p>



<p id="17dc">While trying to please, AI chatbots can create extremely problematic situations. Take, for example, a recent&nbsp;<a href="https://www.fastcompany.com/91014825/deep-fake-biden-robo-call-election-new-hampshire-ai" rel="noreferrer noopener" target="_blank">interaction regarding a chatbot and elections.</a>&nbsp;The GPT-4 and Google’s Gemini chatbots were trained on huge amounts of text from the internet and ready to give AI-generated answers. However, they found that the chatbots often gave voters wrong information or&nbsp;<strong>told them to go to polling places that&nbsp;</strong><a href="https://www.voanews.com/a/ai-chatbots-provide-false-information-about-november-elections/7509355.html" rel="noreferrer noopener" target="_blank"><strong>did not exist 50% of the time</strong></a>. They also advised voters to stay home and not vote.</p>



<p id="a8d4">Remember, if you&#8217;re not using the latest version of the chatbots, they won&#8217;t have the most current information. For example, ChatGPT3 does not provide information after 2020, so it will tell you it can&#8217;t do that if you want current information. To get current information, you must subscribe to the more current version of it. Of course, ChatGPT3 is free, which is an advantage to those who have to watch their money, but it cannot do it if you need accurate 2024 information.</p>



<p id="e1ea">Too many chatbot answers are made up, and&nbsp;<a href="https://www.technologyreview.com/2024/04/25/1091835/chatbot-hallucination-new-tool-trustworthy-language-model/?truid=712bf8bdd2d350eceef044aa8eda8241&amp;utm_source=the_download&amp;utm_medium=email&amp;utm_campaign=the_download.unpaid.engagement&amp;utm_term=Active+Qualified&amp;utm_content=04-26-2024&amp;mc_cid=5214fe5248&amp;mc_eid=fa18a54ee6" rel="noreferrer noopener" target="_blank">a new tool to discover the false answers</a>&nbsp;was needed. A company called Vectara, which was started by former Google workers, found that&nbsp;<strong>chatbots make up facts at least 3% of the time.</strong></p>



<p id="8152">Cleanlab is an AI company that started as a part of MIT&#8217;s quantum computing lab. They developed a new tool in 2021 that helps people understand the reliability of these models.&nbsp;<em>It found errors in 10 commonly used data sets for teaching machine-learning algorithms</em>. Data scientists may&nbsp;<strong>mistakenly believe that all future answers&nbsp;</strong>from big language models&nbsp;<strong>will be accurate</strong>&nbsp;based on a few correct responses.</p>



<p id="f381"><a href="https://link.springer.com/article/10.1007/s13347-023-00640-9" rel="noreferrer noopener" target="_blank">Another problem</a>, of course, is that AI has made it possible for fake people to be created on the Internet. Trolls and bots make it harder to learn online by&nbsp;<em>misleading and causing skepticism about reliable information and people.</em></p>



<p id="ea7d">The future of AI has great promise, but it also requires careful consideration and a degree of concern that we may not have attributed to it in the past.</p>
<p>The post <a href="https://medika.life/chatbots-cant-be-trusted-and-we-need-tools-to-find-fact-from-fiction-in-them/">Chatbots Can’t Be Trusted, and We Need Tools to Find Fact From Fiction in Them</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19648</post-id>	</item>
		<item>
		<title>The Vital Reflection: How Large Language Models Hold a Mirror to Humanity</title>
		<link>https://medika.life/the-vital-reflection-how-large-language-models-hold-a-mirror-to-humanity/</link>
		
		<dc:creator><![CDATA[John Nosta]]></dc:creator>
		<pubDate>Fri, 17 Nov 2023 19:08:26 +0000</pubDate>
				<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Mental Health]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[CHAT GPT]]></category>
		<category><![CDATA[Chatbots]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[GPT]]></category>
		<category><![CDATA[John Nosta]]></category>
		<category><![CDATA[LLMs]]></category>
		<guid isPermaLink="false">https://medika.life/?p=19007</guid>

					<description><![CDATA[<p>Large Language Models have become more than tech achievements; they are vital reflections of humanity.</p>
<p>The post <a href="https://medika.life/the-vital-reflection-how-large-language-models-hold-a-mirror-to-humanity/">The Vital Reflection: How Large Language Models Hold a Mirror to Humanity</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In the age of AI, one of the most remarkable inventions has been the emergence of Large Language Models like GPT. These models have rapidly become more than mere technological achievements; they are vital reflections of humanity itself. Just as holding a mirror to someone&#8217;s breath reveals the invisible signs of life, LLMs reflect the vast, often hidden complexities of human cognition, culture, and consciousness.</p>



<h2 class="wp-block-heading"><strong>Humanity&#8217;s Cognitive Corpus as the Foundation</strong></h2>



<p>At their core, LLMs are constructed from the collective <a href="https://www.psychologytoday.com/intl/blog/the-digital-self/202311/the-corpus-cognitionis-humanae">cognitive corpus</a> of humanity. They are trained on extensive databases that encompass a wide array of human knowledge and expression, from literature and science to mundane conversations and esoteric debates. This training enables them to generate responses that are startlingly human-like, not just in the accuracy of the information provided but in the tone, style, and even creativity of their output.</p>



<h2 class="wp-block-heading"><strong>A Mirror to Our Collective Mind</strong></h2>



<p>LLMs serve as a mirror, allowing us to see a reflection of our collective mind. In their responses, we find echoes of our thoughts, beliefs, biases, and aspirations. This reflection is not just a replication of what they have been fed; it is a recombination, a new synthesis of the myriad elements that make up human expression. In this way, LLMs can offer new insights, challenge established ideas, and even push the boundaries of creativity.</p>



<h2 class="wp-block-heading"><strong>An Ethical and Philosophical Reflection</strong></h2>



<p>This mirroring raises critical ethical and philosophical questions. As we interact with LLMs, we must consider what it means for a machine to reflect our intelligence and creativity. How do we handle the biases inherent in the data they are trained on? What responsibilities do we have when these models echo back not just our wisdom but also our follies and prejudices? The way we answer these questions will shape not just the development of AI but our understanding of ourselves.</p>



<h2 class="wp-block-heading"><strong>A Tool for Self-Reflection and Growth</strong></h2>



<p>LLMs can also be a tool for self-reflection and growth. By interacting with these models, we can gain a clearer view of our collective intellect and identity. They can help us identify gaps in our knowledge, inconsistencies in our thinking, and areas where our biases influence our judgment. This can be an invaluable resource in education, policy-making, and personal development.</p>



<h2 class="wp-block-heading"><strong>The Future of Human-AI Interaction</strong></h2>



<p>Looking ahead, the relationship between humans and LLMs will likely evolve in fascinating ways. These models could become collaborative partners in creative endeavors, problem-solving, and exploring new frontiers of knowledge. The potential for these interactions is vast, limited only by our imagination and the ethical frameworks we build around AI.</p>



<p>Large Language Models like GPT are not just technological wonders; they are vital reflections of humanity. They hold up a mirror to our collective intellect, revealing both the brilliance and flaws inherent in our nature. As we move forward, it is essential to approach these models with a sense of responsibility and introspection, recognizing their potential to both mirror and shape our understanding of what it means to be human.</p>
<p>The post <a href="https://medika.life/the-vital-reflection-how-large-language-models-hold-a-mirror-to-humanity/">The Vital Reflection: How Large Language Models Hold a Mirror to Humanity</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19007</post-id>	</item>
		<item>
		<title>Does Artificial Intelligence (#AI) Chatbot Outperform Physicians in Patient Experience?</title>
		<link>https://medika.life/does-artificial-intelligence-ai-chatbot-outperform-physicians-in-patient-experience/</link>
		
		<dc:creator><![CDATA[Gil Bashe, Medika Life Editor]]></dc:creator>
		<pubDate>Thu, 18 May 2023 13:02:14 +0000</pubDate>
				<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Digital Innovation]]></category>
		<category><![CDATA[Diseases]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[For Doctors]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Health Insurance]]></category>
		<category><![CDATA[Policy and Practice]]></category>
		<category><![CDATA[Public Health]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Software and Apps]]></category>
		<category><![CDATA[TeleHealth]]></category>
		<category><![CDATA[CHAT GPT]]></category>
		<category><![CDATA[Chatbots]]></category>
		<category><![CDATA[Gil Bashe]]></category>
		<category><![CDATA[JAMA]]></category>
		<category><![CDATA[physicians]]></category>
		<guid isPermaLink="false">https://medika.life/?p=18185</guid>

					<description><![CDATA[<p>JAMA Article Draws Fire for Its Research Biases on ChatGPT and Chatbot - But Should We Ignore Its Conclusions Altogether?</p>
<p>The post <a href="https://medika.life/does-artificial-intelligence-ai-chatbot-outperform-physicians-in-patient-experience/">Does Artificial Intelligence (#AI) Chatbot Outperform Physicians in Patient Experience?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>A recent&nbsp;<a href="https://today.ucsd.edu/story/study-finds-chatgpt-outperforms-physicians-in-high-quality-empathetic-answers-to-patient-questions">Journal of the American Medical Association (JAMA) study</a>&nbsp;(summary hyper-linked) found that&nbsp;<a href="https://www.linkedin.com/feed/hashtag/chatgpt">#ChatGPT</a>&nbsp;outperforms physicians in counseling patients. The&nbsp;<a href="https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2804309?guestAccessKey=6d6e7fbf-54c1-49fc-8f5e-ae7ad3e02231&amp;utm_source=For_The_Media&amp;utm_medium=referral&amp;utm_campaign=ftm_links&amp;utm_content=tfl&amp;utm_term=042823">complete research</a>&nbsp;compares written responses from physicians and ChatGPT to real-world health patient-directed questions. It&#8217;s rocked quite a few boats in the medical community. Some within that community are threatened, and others are reflective.</p>



<p>A panel of licensed healthcare professionals preferred ChatGPT responses 79% of the time and rated ChatGPT responses as higher quality and more empathetic.&nbsp;Gulp. Understandably, some doctors are not happy with this study. And many were not pleased with me for not diving deeper into the complexities inherent in the research in my initial LinkedIn post. Message heard. Understood!</p>



<p>The news headlines and the initial study callouts overplay the immediate importance of ChatGPT in the physician-patient relationship. Physicians do not fare poorly. However, the authors provide an inflection point that should not be ignored and must be acknowledged &#8211;&nbsp;<em>Communication is Part of the Care and Cure</em>! Physicians must be trained and have time to deal with patient curiosity and urgency.&nbsp;<a href="https://www.linkedin.com/feed/hashtag/patientexperience">#Patientexperience</a>&nbsp;is different. Patients do not want to sit idle or silent. They are curious and concerned.</p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="696" height="427" src="https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?resize=696%2C427&#038;ssl=1" alt="" class="wp-image-18186" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?resize=1024%2C628&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?resize=300%2C184&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?resize=768%2C471&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?resize=150%2C92&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?resize=696%2C427&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?resize=1068%2C655&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?w=1488&amp;ssl=1 1488w, https://i0.wp.com/medika.life/wp-content/uploads/2023/05/image.png?w=1392&amp;ssl=1 1392w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption>&#8220;Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media&#8221; Forum Appearing in JAMA. Authored by John W. Ayers, PhD, MA1,2; Adam Poliak, PhD3; Mark Dredze, PhD4; et al</figcaption></figure>



<p>As generations have become more familiar with technology in their day-to-day lives, perhaps they place more trust in machines&#8217; &#8220;unbiased nature&#8221; than in humans. That assumption has fed the rise of &#8220;<a href="https://www.linkedin.com/feed/hashtag/misinformation">#misinformation</a>&#8221;: we believe our Twitter feeds if we don&#8217;t explore the facts further. But our screens reduce the pressure of needing to engage with people in the moment &#8211; they give us time to think, to check in with this &#8220;on-call&#8221; information aggregator, and to let the information sink in without being confronted about the next step. Doctors are too often pressured into an eight-minute-per-patient reimbursement model. It&#8217;s not their fault &#8211; it&#8217;s the system they must co-exist within. But that tilted system leads consumers to seek &#8211; to need &#8211; alternatives. If so, even an imperfect ChatGPT4 and its successors will be a go-to.</p>



<p>There are changes afoot that we need to make happen sooner rather than later by moving minds, systems, and behaviors so that life-sustaining and life-saving approaches to patient care may eventually tip the scale of human survival toward health and wellness. However, we see data from a human perspective – sometimes self-interests or emotional needs for control. ChatGPT is the aggregate of data and human input. It is not divorced from us but a faint mirror of the human experience.</p>



<p>Yes, this study is worth reading.&nbsp;Yes, many have criticized its design and the intent of the authors.&nbsp;Yes, many are fearful that machines may replace physicians. But, the latter assumption is doubtful. Reading between the lines reinforces that, as industry colleague&nbsp;<a href="https://www.linkedin.com/in/riteshpatel?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAABem0B2SG6vfjkj8ZbUw-MIarsTYQB1xE">Ritesh Patel</a>&nbsp;often says,&nbsp;<em>“If it moves, digitize it!”&nbsp;</em>People get their information in ways that are quick and convenient. That is a reality everyone in the health community must face!</p>



<p>The medical community and health communicators must rise to the moment if they want to harness this technology. Learn about ChatGPT and how it operates &#8211; its prompts. Also, read the words of experts on the digital health news platform&nbsp;<em><a href="https://medika.life/is-gpt-digital-healths-inflection-point/">Medika Life</a></em>, including the insightful words of innovation theorist&nbsp;<a href="https://www.linkedin.com/in/johnnosta?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAF4ZrIB71KyhWiZP7iSK431GX-NykowjSs"><strong>John Nosta</strong></a>. John will rock your boat; however, he often points to where this is going. Read the words of&nbsp;<a href="https://www.linkedin.com/in/tomlawry?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAF0i4IB54VXMTlOIBrwZOsyJqrosCj3M70">Tom Lawry</a>, former head of Microsoft&#8217;s AI team, author of the best-seller&nbsp;<em><a href="https://www.amazon.com/Hacking-Healthcare-Intelligence-Revolution-Reboot/dp/1032260157">Hacking Healthcare</a></em>, and a global counselor on the practical application of AI.</p>



<p>Almost one year ago, I penned a piece titled&nbsp;<em><a href="https://medika.life/10-health-possibilities-we-cant-afford-to-block/">Health Possibilities We Cannot Afford to Block</a></em>. It included 10 ideas/technologies &#8211; #1 was&nbsp;<a href="https://www.linkedin.com/feed/hashtag/ai">#AI</a>, the heart of ChatGPT. Fixing one part of the healthcare puzzle is encouraging &#8211; but is it transformational? What can we do to make things work better for patients? Medicine can harness the power of ChatGPT to make it work even better for patients seeking healing solutions. Perhaps we can give physicians more time to help patients feel their doctors are, and always have been, among their greatest advocates. We can also bring technology companies and leading medical associations together to talk about ChatGPT&#8217;s influence on trusted people-to-people connections, particularly between physicians and patients.</p>



<p>Why do consumers turn to machines instead of people for medical counsel? Well, we haven&#8217;t been able to clone &#8211; or at least develop teaching models drawing upon &#8211; the many outstanding physicians who demonstrate incredible patience and empathy for patient woes and questions, doctors like WebMD&#8217;s&nbsp;<a href="https://www.linkedin.com/in/drjohnwhyte?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAcT9AABHarYovqnQB5NILPLEzy_5O6FT3A">John Whyte</a>&nbsp;and the NHS&#8217;s and Microsoft&#8217;s&nbsp;<a href="https://www.linkedin.com/in/junaidbajwa?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAATbEIgBrrHc7r6m68qdrd5GoYhvq_svfx8">Junaid Bajwa</a>. There are many answers to consider; among the most important are skill, collaboration, and empathy.</p>



<p>Consumers may feel that devices are better listeners and work with them in partnership. We should expect this outcome, given the fragmented health ecosystem that consumers must navigate with difficulty. We must recognize that ChatGPT&#8217;s popularity among health information seekers didn&#8217;t just happen; these same seekers evidently feel they are not getting what they need elsewhere.</p>



<p>Keep learning!&nbsp;This is not the end of humanity and the beginning of the Matrix &#8211; where people, software and machine battle for survival. The world will be changing in amazing ways in the short years ahead. Collaboration and communications go hand-in-hand as essential tools for healing.</p>
<p>The post <a href="https://medika.life/does-artificial-intelligence-ai-chatbot-outperform-physicians-in-patient-experience/">Does Artificial Intelligence (#AI) Chatbot Outperform Physicians in Patient Experience?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">18185</post-id>	</item>
		<item>
		<title>Chatbots Are Coming for MH, and There Are Many Sides to This New Therapy Tool</title>
		<link>https://medika.life/chatbots-are-coming-for-mh-and-there-are-many-sides-to-this-new-therapy-tool/</link>
		
		<dc:creator><![CDATA[Pat Farrell PhD]]></dc:creator>
		<pubDate>Wed, 01 Feb 2023 06:43:47 +0000</pubDate>
				<category><![CDATA[Alternate Health]]></category>
		<category><![CDATA[Anxiety and Depression]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Disorders and Conditions]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[For Practitioners]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Habits for Healthy Minds]]></category>
		<category><![CDATA[Mental Health]]></category>
		<category><![CDATA[CHAT]]></category>
		<category><![CDATA[Chatbots]]></category>
		<category><![CDATA[Patricia Farrell]]></category>
		<guid isPermaLink="false">https://medika.life/?p=17550</guid>

					<description><![CDATA[<p>Chatbots may appear to address the dearth of mental health services in rural areas or for those with limited income, but with this digital advance come untold, possibly unrealized, difficulties and even dangers.</p>
<p>The post <a href="https://medika.life/chatbots-are-coming-for-mh-and-there-are-many-sides-to-this-new-therapy-tool/">Chatbots Are Coming for MH, and There Are Many Sides to This New Therapy Tool</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p id="c0fd">Mental health&nbsp;<a href="https://en.wikipedia.org/wiki/Chatbot" rel="noreferrer noopener" target="_blank">chatbots,</a>&nbsp;virtual therapists, or digital mental health tools, are computer programs designed to provide psychological support and guidance to individuals experiencing mental health challenges. They can use these chatbots on various devices, such as computers, smartphones, and tablets. They can provide various services, such as counseling, therapy, and self-help tools.</p>



<p id="e438">In recent years,&nbsp;<a href="https://psychnews.psychiatryonline.org/doi/10.1176/appi.pn.2022.05.4.50" rel="noreferrer noopener" target="_blank">mental health chatbots</a>&nbsp;have become more popular for helping people who don&#8217;t have access to traditional mental health services, like those who live in remote or underserved areas.</p>



<p id="b258">One of the main&nbsp;<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7385637/" rel="noreferrer noopener" target="_blank">advantages of mental health chatbots is their accessibilit</a>y. Traditional mental health services, like in-person therapy, can be hard for many people because they are expensive, hard to get to or have a bad reputation.</p>



<p id="694c">They can use mental health chatbots from anywhere with an internet connection, and are often free or cheap. This makes them a good choice for people who can&#8217;t afford or don&#8217;t have access to traditional mental health services in rural areas or can&#8217;t pay for therapy.</p>



<p id="a4d3">Another advantage of mental health&nbsp;<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6610568/" rel="noreferrer noopener" target="_blank">chatbots is their anonymity</a>. Due to stigma or fear of being judged, many people may be afraid to get help with mental health problems. Chatbots can provide a sense of anonymity, as individuals can communicate with the chatbot in the privacy of their own homes, without fear of being judged or stigmatized. This can make it easier for individuals to seek help and can help to reduce the barriers to accessing mental health services.</p>



<p id="c73b">Chatbots can also help with mental health issues&nbsp;<a href="https://www.utoledo.edu/offices/rocketwellness/docs/Tess%20Program.pdf" rel="noreferrer noopener" target="_blank">24 hours a day, seven days</a>&nbsp;a week. This is especially helpful for people going through a mental health crisis. Traditional mental health services, such as in-person therapy, may not be available at all times, and may not provide the level of support that individuals may need during a crisis.</p>



<p id="71de">On the other hand, mental health chatbots can offer&nbsp;<em>support and advice around the clock</em>, which can be vital for people going through a mental health crisis. They are also there for people whose work shifts prevent them from leaving for a therapy session.</p>



<p id="7438">However, some potential problems with mental health chatbots should be considered. One issue is the&nbsp;<em>lack of human interaction and empathy</em>. AI algorithms are only as good as the information their programmers give them, and programmers may have biases.</p>



<p id="1ae1">Mental health chatbots are designed to give&nbsp;<em>pre-programmed answers</em>&nbsp;to specific problems. Still, they&nbsp;<em>may not provide the same empathy</em>&nbsp;and understanding as a human therapist. This can be a problem for people&nbsp;<a href="https://www.apaservices.org/practice/business/technology/tech-column/mental-health-chatbots" rel="noreferrer noopener" target="_blank">looking for emotional support and validation</a>.</p>



<p id="803f">Another issue is the lack of regulation for mental health chatbots. Professional groups like the&nbsp;<a href="https://www.apaservices.org/practice/ce/state/state-info" rel="noreferrer noopener" target="_blank">American Psychological Association have rules</a>&nbsp;about how traditional mental health services are run, but&nbsp;<em>mental health chatbots don&#8217;t have to follow the same rules</em>. This can be a problem for people looking for professional help with mental health issues because the chatbot may not be able to give the same level of professional support.</p>



<p id="40d3">Another issue is the reliability of the information provided by mental health chatbots. Since these chatbots are not real people and only know how to respond to questions that have already been programmed, they&nbsp;<a href="https://www.npr.org/sections/health-shots/2023/01/19/1147081115/therapy-by-chatbot-the-promise-and-challenges-in-using-ai-for-mental-health" rel="noreferrer noopener" target="_blank">may&nbsp;<em>not be able to give accurate and reliable information</em></a>. This can be a problem because getting the wrong information can&nbsp;<em>worsen things</em>.</p>



<p id="e2c4">Also,&nbsp;<em>little research is available</em>&nbsp;on how well chatbots work for mental health. While there have been a few reports on the effectiveness of chatbots in this field, there has been a dearth of large-scale, randomized controlled trials.</p>



<p id="b3ab">Are chatbots the answer for mental health issues? As with any new tool, there are benefits and there are issues that still need to be resolved, so we cannot give a definitive answer today. Yes, chatbots can be useful, but we must use them understanding that they may be incorrect or provide unhelpful responses.</p>
<p>The post <a href="https://medika.life/chatbots-are-coming-for-mh-and-there-are-many-sides-to-this-new-therapy-tool/">Chatbots Are Coming for MH, and There Are Many Sides to This New Therapy Tool</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17550</post-id>	</item>
		<item>
		<title>When It Comes to Medicine – The Smartest Person in the Room Is the Patient!</title>
		<link>https://medika.life/when-it-comes-to-medicine-the-smartest-person-in-the-room-may-be-the-patient/</link>
		
		<dc:creator><![CDATA[Gil Bashe, Medika Life Editor]]></dc:creator>
		<pubDate>Fri, 20 May 2022 05:17:13 +0000</pubDate>
				<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Digital Innovation]]></category>
		<category><![CDATA[Diseases]]></category>
		<category><![CDATA[For Doctors]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Healthcare Policy and Opinion]]></category>
		<category><![CDATA[Industry News]]></category>
		<category><![CDATA[Innovations]]></category>
		<category><![CDATA[Policy and Practice]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Access to Care]]></category>
		<category><![CDATA[Chatbots]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Greg Johnsen]]></category>
		<category><![CDATA[LifeLink Systems]]></category>
		<category><![CDATA[Patient Experience]]></category>
		<guid isPermaLink="false">https://medika.life/?p=15203</guid>

					<description><![CDATA[<p>Move over chatbots! It’s time to recognize that the smartest person in the room may not be the robot, but rather the patient, whose experience is creating a new type of dialogue through conversational AI. Conversational AI is a cutting-edge form of artificial intelligence enabling consumers to interact with computer applications the way they would with other humans.</p>
<p>The post <a href="https://medika.life/when-it-comes-to-medicine-the-smartest-person-in-the-room-may-be-the-patient/">When It Comes to Medicine – The Smartest Person in the Room Is the Patient!</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Move over chatbots! It’s time to recognize that the smartest person in the room may not be the robot, but rather the patient, whose experience is creating a new type of dialogue through conversational AI. <a href="https://en.wikipedia.org/wiki/Conversational_user_interface">Conversational AI</a> is a cutting-edge form of artificial intelligence enabling consumers to interact with computer applications the way they would with other humans.</p>



<p><a href="https://www.linkedin.com/in/gregjohnsen/">Greg Johnsen</a>, chief executive officer of <a href="https://www.lifelinksystems.com/">Lifelink Systems</a>, a technology company focusing on addressing people’s health concerns using conversational AI solutions, puts patients first. Greg is drawing on his more than 30 years of software, technology and health expertise to keep the patient at the center of innovation and the care conversation.  Medika Life spoke with Greg about his vision for putting technology to work to improve people’s health experience and access to care.</p>



<p class="has-text-align-center">&#8212;&#8212;-</p>



<p><em><strong>Gil Bashe: Tell me a bit of your vision, why this suite of technologies, and what problem did you want to leap into and solve for people who have pressing health needs?</strong></em></p>



<p><strong>Greg Johnsen</strong>: There&#8217;s an emerging class of technology that&#8217;s explicitly focused on patient and consumer experience. Conversational AI (artificial intelligence) has two sides. One side is all about the huge problems consumers have engaging with healthcare organizations in the digital space, whether it&#8217;s waiting on hold for somebody online, or trying to find and download the specific app needed for a particular kind of visit, learning how to use it, and remembering to set up a username and a password.</p>



<p>The other side is that healthcare organizations have a massive problem delivering that kind of technology. It&#8217;s hard. It&#8217;s a complex technology, and many of these organizations are strapped for the resources and the capital to spin up these kinds of elegant, modern patient-experience technology layers.&nbsp;</p>



<p><strong><em>Bashe:</em></strong><em> This nation&#8217;s health system is incredibly fragmented; patients must navigate between a physician, a payer, and a pharmacy. How do you begin to navigate such a fragmented system? How do you make those connections? And what&#8217;s your thought about making the pathway easier for the consumer?</em></p>



<p><strong>Johnsen</strong>: Some of it is the modality itself. Think about the things that frustrate consumers when they engage digitally: setting up an account, getting a username and password. If you just took those chunky, frustrating things and solved them horizontally, making the log-on process simple or making the download experience disappear, that would be a huge win.</p>



<p>However, the other side is the healthcare system or organization side. They don&#8217;t have the people and human brains at scale to connect in real time with everyone. We have to find a way to encode the rules and the smarts, the understanding of referrals, prescription pickups, and the questions a consumer might have about dosing, and make those kinds of conversations asynchronous. They can happen anytime; they don&#8217;t have to be real-time, and they are driven by artificial intelligence. I&#8217;ll say automated, and that&#8217;s key, because there&#8217;s no way you&#8217;ll get the coverage to give consumers immediate, easy, convenient access unless you find a way to multiply the workforce by 10, even 100 times.</p>



<p><strong><em>Bashe: When we deal with something so critical as our health needs, the process should be easy. Why is it so hard for patients who are, in theory, the customer of the system? Why is it that the system is the customer of itself?</em></strong></p>



<p><strong>Johnsen</strong>: Let’s face the music here a bit. That kind of experience is not new in the consumer world. There are industries where, whether it is entertainment, travel, finance, or banking, you are starting to see 24/7 available digital conversations. It can be as easy as chatting or texting with a friend. One of the barriers to the massive change in the patient experience is the healthcare organizations beginning to realize just how strategic and game-changing it can be.</p>



<p>The technology is around. It’s not like we have to spend a decade inventing the technology. There are companies, we are one of them, that have the infrastructure, tooling, services, know-how, and patterns to make big chunks of this work. Healthcare systems, health providers, and healthcare organizations have to get to where they see this as a strategic imperative. I believe that if you aren’t focusing your treasure and your strategic intent on creating this seamless digital experience for your consumer or patients, you are going to lose.</p>



<figure class="wp-block-image size-large is-resized"><img decoding="async" src="https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer.jpg?resize=696%2C928&#038;ssl=1" alt="" class="wp-image-15205" width="696" height="928" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=768%2C1024&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=225%2C300&amp;ssl=1 225w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=1152%2C1536&amp;ssl=1 1152w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=1536%2C2048&amp;ssl=1 1536w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=150%2C200&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=300%2C400&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=696%2C928&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?resize=1068%2C1424&amp;ssl=1 1068w, 
https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?w=1920&amp;ssl=1 1920w, https://i0.wp.com/medika.life/wp-content/uploads/2022/05/Photo-Provided-by-LifeLink-Greg-Johnsen-Talks-Conversation-AI-with-LifeLink-Colleague-Greg-Kefer-scaled.jpg?w=1392&amp;ssl=1 1392w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption>LifeLink Systems CEO Greg Johnsen Talks with CMO Greg Kefer on the Role Conversational AI Can Play in Improved Patient Care</figcaption></figure>



<p><strong><em>Bashe: I have to say the health system is a system and a culture; that culture crushes innovation. The health system is resistant to change. You just said that all the technologies exist. Can you talk about some of the technologies you are harnessing to work with the more progressive-minded clients in your portfolio?&nbsp; What are the tools of technology that Lifelink Systems deploy that make a difference in this interaction between the consumer, their wellbeing, and the help and care and information they seek?&nbsp; &nbsp;</em></strong></p>



<p><strong>Johnsen</strong>: The first thing I’ll say about this infrastructure is that, in our case, we made a significant commitment that the place to meet the consumer is the mobile phone. That supercomputer we all have in our pockets, that we are on 3 to 5 hours a day. What we are doing on that thing is texting and chatting. The time a consumer spends on a messaging app now exceeds what they spend on a social network, so it’s language based.</p>



<p>The technology is language-based conversation engagement on a mobile phone with little to no friction in terms of how you get there. That technology is super helpful in pre-visit prep, appointment reminders, and intake cases. We are doing this at scale, and we are doing this with health care systems and life sciences companies. We have millions of patients who get a text message with a link they click on three days before their appointment, the conversation starts, and it can happen any time they are ready. Instead of getting to a clinic or waiting room where you sit and get a clipboard with papers that you fill out, all this can be done ahead of time virtually with digital assistance.</p>



<p>The mobile phone becomes the worker, and it’s not like the healthcare system gave the patient a bunch of work to do; they sent out a digital worker to help do the job. That technology is conversational, mobile, workflow-based and straightforward, meaning the system knows what’s next. All that knowledge must be in the system to feel like intelligence, but it delivers value to the consumer.</p>



<p><strong><em>Bashe: The whole vision around consumer experience with medicine is going through a bit of a shift. We are talking about diversity and inclusion. We are talking about senior disparities. We are talking about the future of drug development. Conversational AI has tremendous potential. Do you see conversational AI making remote participation in clinical trials more feasible? How do you see diversity and inclusion cascading? Even more concern for many is senior care. Where do you see this going in the future and making medicine friendlier and more accessible?</em></strong></p>



<p><strong>Johnsen</strong>: You just described another zone of medicine and healthcare where friction is the biggest enemy. Think about drug development and clinical trials: the tens of thousands of people that need to be relocated for a clinical trial, not just for one moment, but for many moments over many years in some cases. One of the most telling statistics for clinical trials is the cost that manufacturers and CROs incur between enrolling a subject into a clinical trial and that subject’s first office visit; sometimes, 50% or more drop out.</p>



<p>Somebody takes the time to find out about a clinical trial, fills out an enrollment form, and nobody talks to them for the next 5, 7, or 10 days. It’s all about how you connect with the patient. How do you onboard them? How do you give them a modern experience so they feel connected, held, and guided through the process? Today it’s primarily people calling on phones, trying to catch these participants and find them. The answer is digital: mobile and conversational.</p>



<p>We see many opportunities for conversational AI and mobile phones in clinical trials, and in rare drug specialty patient services, hubs where connecting with the patient is critical.</p>



<p><strong><em>Bashe: During the height of the COVID era, many clinical trials collapsed. Neurology and oncology trials did continue, but at a significantly reduced rate. Clinical sites have felt concerned or confused about decentralization and remote patient participation. The drug development system was based upon sites, and contract research organizations having special agreements with sites. Does conversational AI expand the influence of fixed sites so that trials can be both site-based and remote? What’s your perspective on that in terms of decentralization and the future of hybrid?</em></strong></p>



<p><strong>Johnsen</strong>: Any interaction, conversation, or protocol that is defined and can be executed by a human is a candidate for a digital equivalent. That goes from scheduling to answering questions to informed consent and collecting data. All these interactions should have a shadow digital equivalent. The degree to which a site takes advantage of this infrastructure depends on the protocol, timing, and sponsor; there are two giant gears at play here: humans and physical sites.</p>



<p>The digital shadow that can handle most of those conversations and those flows can go to the patients wherever that subject is, and then the in-between is humans going to the home. Nurses are deployed to do the blood draw or make it super convenient when the clinic comes to the home, but why not do the intake before the nurse gets to the home? Why not deliver the test result with a digital conversation after the nurse has been at the home? Hybrid is the right concept, but you need the right conversational infrastructure to properly adjust the various digital dials along the care spectrum.</p>



<p><strong><em>Bashe: Who sets the dial? Is it the site, is it the participant, is it mutual, or is it the trial&#8217;s sponsor?</em></strong></p>



<p><strong>Johnsen</strong>: I imagine that at the end of the day, the sponsor is driving and is on the hook for ensuring that the clinical trial is executed correctly and moves through the approval process. The significant inputs to that successful execution are the burden on the site and the patient, speed in getting through, and getting high-quality, relevant data. Not just the data on the clinical trial itself, but patient experience data, so your subsequent clinical trial is better informed and more efficient. Sponsors need to go through their own evolution because it&#8217;s about to get more complicated. The opportunity is enormous to get super-efficient and great at data collection so that you can transform clinical trials.</p>



<p><strong><em>Bashe: Many people think of chatbots or conversational AI as the perfection of the voice instead of the epitome of the interaction. When you say conversation, you also mean connection. Is that correct?</em></strong></p>



<p><strong>Johnsen</strong>: Once you know where you are headed, you can begin to test what kinds of interactions and dialogues are most effective for the patient in getting to that outcome. It is open, fascinating stuff, and all these little things that feel stylistic add up to make a difference, and we have seen these things make a difference.</p>



<p>We found that a step-by-step dialogue is more effective at digital conversion than giving somebody the complete form where you ask all the questions at once. People stay engaged in that dialogue form longer and complete the workflow, even if it takes longer than on a platform where they see everything at once. That flexibility is critical, and it’s hard to do with a portal, an app, or even a human.</p>



<p><strong><em>Bashe: There seems to be a real differentiator here. The emphasis is on technology supporting the journey, not technology as the offer. Can you share a brief story about a customer saying, “I get the fact that the experience the consumer will have through this journey is a reflection on me; it speaks to what our brand and our corporate image is about?” Can you share any insight where a customer says, “we need this because this truly reflects who we aspire to be in the healthcare system?”</em></strong></p>



<p><strong>Johnsen</strong>: We have a customer in the specialty pharma space, and they are a leader. Every time a patient is prescribed one of these costly therapies, somewhere along the way the manufacturer will invest a fair amount to ensure that the consumer gets the support they need.</p>



<p>The goal is to get the patient through all the financial hurdles, onto therapy, and getting well again. This customer took our technology and gave it a persona that reflected the values of their call-center team, like another agent. They gave her a name. She is the one who meets the patient on their mobile phone when that patient gets prescribed the medication, and she lets the patient know all the work she is going to do for them, what she is available for, and how they want to be engaged. I think that persona found a way to capture the essence, the language, and the feel of the service in the way they wanted.</p>



<p>That would be tricky with traditional digital approaches because the technology has to have enough skills to make adjustments. And she is evolving; she is getting new skills. She started doing a particular set of things, and now she is doing more. She is front and center for the company because she exudes the brand, the feeling, and the personality.</p>



<p><strong><em>Bashe: Let’s look a little bit into the future, but not too far. Six to twelve months down the road, conversational AI is making such a significant headway, and we talked about some of the places where you and your community are helping the health system make those advances. We spoke about provider systems, specialty medication groups, decentralized clinical trials, and pharmaceutical industry sponsors and where this is going. What is the next segment, the next innovation advance you see down the road?</em></strong></p>



<p><strong>Johnsen</strong>: It is less about creating something new. There is a lot of innovation in tech available, and even today the healthcare systems, pharma sponsors, and CROs are just in the first inning. We will see a new set of metrics arrive: what percentage of your total interactions with patients, customers, and subjects are digital? How many digital conversations did you launch this year? What you need to see is a universe in which you are getting massive scale, in which 80% of your interactions are digital, but your total number of interactions is 10 times what it was before.</p>



<p>We see innovation already. Healthcare systems, pharma companies, insurance, employers, and all these sectors that deal with consumers in healthcare need to amplify their digital conversation envelope by orders of magnitude.&nbsp;</p>



<p class="has-text-align-center">&#8212;&#8212;-</p>



<p><strong><em>Conversational AI is transforming many industries into customer-friendly conversations and connections: travel, finance, consumer sales, and finally health. Face it: it is still easier to order a pizza online than to make a doctor’s appointment. Through conversational AI, Greg and LifeLink Systems are helping patients describe their symptoms through a series of questions meant to navigate around wait-time obstacles. Will the health ecosystem suddenly become easier to engage? Will patients be directed to the right medical experts to resolve their concerns? That is exactly what Greg and his colleagues are working to achieve.</em></strong></p>
<p>The post <a href="https://medika.life/when-it-comes-to-medicine-the-smartest-person-in-the-room-may-be-the-patient/">When It Comes to Medicine – The Smartest Person in the Room Is the Patient!</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">15203</post-id>	</item>
	</channel>
</rss>
