
Chatbots Can’t Be Trusted, and We Need Tools to Separate Fact From Fiction in Their Answers

Chatbots use AI, automated rules, natural language processing (NLP), and machine learning (ML) to process data and answer questions. They come in two main types: task-oriented (declarative) chatbots, which are programs built to do one job, and data-driven, predictive conversational chatbots, often called virtual or digital assistants. The latter are much smarter, more interactive, and more personalized than task-oriented chatbots.

About 1.5 billion people use chatbots, most of them in the United States, India, Germany, the United Kingdom, and Brazil, and worldwide adoption is still growing. By 2027, a quarter of businesses will probably use chatbots as their primary way to contact customers. This enormous growth already represents a gain of about $200 million a year, and at the current compound annual growth rate (CAGR) of about 22%, that figure is expected to hit $3 billion by the end of this decade.

While chatbots are gaining importance in business and, potentially, healthcare, they have inherent problems that must be addressed. Left unaddressed, those problems can leave biased or distorted information baked into a chatbot's algorithms. Training a chatbot is a demanding task that requires careful design and verification of the results, and one pitfall that must be overcome is the failure to screen out anything that could introduce bias, or the programmers' failure to recognize their own biases and blind spots.

While researching chatbots for a print article, I found that they returned alleged articles and URLs that were nonexistent. Had I used them, my article would have contained many mistakes. Verifying any use of AI in medical and healthcare information searches is crucial.

The phone tree was the first chatbot, and customers found it frustrating to navigate its computerized menu to reach this automated customer-service model. As technology improved and AI, ML, and NLP grew smarter, that model evolved into the pop-up live chats we see on screen today, and the evolution continues.

Although aimed primarily at business, chatbots such as ChatGPT3 can be used for a variety of purposes, including academic research, personal interest, creative projects, writing, and marketing. A chatbot can also help with computer code, from improving existing code to writing new code in different languages, as sketched below.
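As a rough illustration, a minimal Python sketch of that kind of request, made through OpenAI's chat API, might look like the following. The model name, the sample snippet, and the prompt wording are all assumptions for illustration, not a recommendation.

    # Minimal sketch: asking a chatbot to improve a piece of code.
    # Assumes the OpenAI Python SDK (pip install openai) and an
    # OPENAI_API_KEY environment variable; the model name is an assumption.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    snippet = "def add(a, b): return a+b"  # hypothetical code to improve

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; substitute whichever you use
        messages=[
            {"role": "user",
             "content": f"Improve this Python function and add a docstring:\n{snippet}"},
        ],
    )
    print(response.choices[0].message.content)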

ChatGPT3 will let you prompt it to rewrite what it has provided and will "apologize" if it has not met your expectations, then offer another version of what you were seeking. The more detailed your prompt, the more likely you are to receive satisfactory information.

This can go on through many versions of your prompt until you are satisfied; the chatbot does not tire of attempting to fulfill your request. You can also specify how many words you want the answer to be, as in the sketch below.
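Carrying the earlier sketch one step further, the iterative, word-count-specific exchange described above might look like this; the prompts and the 100-word target are hypothetical.

    # Sketch: keep the conversation history and ask for a rewrite with a
    # word-count target. Prompts and the length target are hypothetical.
    from openai import OpenAI

    client = OpenAI()
    messages = [
        {"role": "user", "content": "Summarize the risks of chatbot hallucinations."},
    ]
    first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    messages.append({"role": "assistant", "content": first.choices[0].message.content})

    # Not satisfied? Refine the prompt and ask again, specifying length.
    messages.append({"role": "user",
                     "content": "Rewrite that in about 100 words, in plain language."})
    second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    print(second.choices[0].message.content)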

Chatbots can also assist with identifying errors and generating many types of content. For example, they can summarize a play, book, or story; write a press release; draft a lesson plan on a specific topic; develop a marketing plan; outline a research project or paper; and handle many other writing tasks.

One of the problems with research papers specifically, especially when the user wants complete URLs for any research cited, is that the material does not exist at that web address and, in fact, may not exist at all. The chatbot aims to follow the requested prompt, and that's one of its faults: chatbots excel at creating plausible titles or details for nonexistent research articles, and without fact-checking, they can deceive instead of providing accurate information. One basic safeguard is sketched below.
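A modest first check is to confirm that the URLs a chatbot cites actually resolve; even then, a live page must be read to verify that it says what the chatbot claims. This sketch uses Python's requests library, and the citation URL is hypothetical.

    # Sketch: check whether chatbot-cited URLs actually resolve.
    # A 404 or connection error is a strong hint the citation is fabricated;
    # a success only means a page exists, not that it supports the claim.
    import requests

    def url_resolves(url: str) -> bool:
        try:
            reply = requests.head(url, allow_redirects=True, timeout=10)
            return reply.status_code < 400
        except requests.RequestException:
            return False

    citations = ["https://example.com/hypothetical-study"]  # from a chatbot answer
    for url in citations:
        status = "reachable" if url_resolves(url) else "check by hand: may be fabricated"
        print(url, "->", status)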

While trying to please, AI chatbots can create extremely problematic situations. Take, for example, recent testing of chatbots on election questions. GPT-4 and Google's Gemini were trained on huge amounts of text from the internet and stood ready to give AI-generated answers, yet the researchers testing them found that the chatbots gave voters wrong information, including directions to polling places that did not exist, about 50% of the time. Some even advised voters to stay home and not vote.

Remember, if you’re not using the latest version of a chatbot, it won’t have the most current information. ChatGPT3, for example, does not cover anything after 2020 and will tell you so if you ask for current information; to get up-to-date answers, you must subscribe to the newer version. ChatGPT3 is free, which is an advantage for those who must watch their money, but it cannot help if you need accurate 2024 information.

Too many chatbot answers are simply made up, so a tool to catch the false ones was needed. Vectara, a company started by former Google employees, found that chatbots invent facts at least 3% of the time.

Cleanlab, an AI company that began as part of MIT's quantum computing lab, developed a tool in 2021 that helps people understand how reliable these models are; it found errors in 10 commonly used data sets for training machine-learning algorithms. That matters because data scientists may mistakenly assume, on the strength of a few correct responses, that all future answers from large language models will be accurate.
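Cleanlab's approach is also available as an open-source Python library of the same name. As a hedged sketch of the idea, given a model's predicted class probabilities for each training example, the library can rank the examples whose labels look wrong; the tiny labels and probabilities below are invented for illustration.

    # Sketch: flag likely label errors in a training set with cleanlab
    # (pip install cleanlab). The labels and predicted probabilities
    # below are invented for illustration.
    import numpy as np
    from cleanlab.filter import find_label_issues

    labels = np.array([0, 0, 1, 1, 1])       # given (possibly noisy) labels
    pred_probs = np.array([                  # model's out-of-sample probabilities
        [0.90, 0.10],
        [0.80, 0.20],
        [0.20, 0.80],
        [0.95, 0.05],                        # labeled 1, but the model is confident it's 0
        [0.10, 0.90],
    ])

    # Indices of examples whose labels look wrong, worst first;
    # the example at index 3 should stand out.
    issues = find_label_issues(labels=labels, pred_probs=pred_probs,
                               return_indices_ranked_by="self_confidence")
    print("Suspect examples:", issues)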

Another problem, of course, is that AI has made it possible to create fake people on the internet. Trolls and bots make it harder to learn online by spreading misleading content and sowing skepticism about reliable information and the people who provide it.

The future of AI holds great promise, but it also requires careful consideration and a degree of caution that we may not have given it in the past.

Pat Farrell, PhD

I'm a licensed psychologist in NJ/FL with over 30 years in the field, serving in most areas of mental health: psychiatric research, consulting, post-graduate teaching, private practice, consulting for WebMD, and writing self-help books. Currently, I am concentrating on writing articles and books.

