In an era where Artificial Intelligence (AI) continues to reshape industries, Large Language Models (LLMs) and Generative AI (GenAI) stand out as the next big thing in tech.
They promise to revolutionize how businesses operate by enabling more natural human-machine interactions, automating complex tasks, and unlocking new efficiencies across industries. From enhancing customer service with AI-driven assistants to streamlining workflows in healthcare and finance, these technologies are reshaping the way organizations function.
However, while consumers have been experimenting freely with tools like ChatGPT or DeepSeek, enterprise AI adoption plays by different rules. And in healthcare specifically, where clinical safety, regulatory compliance, and data integrity are non-negotiable, a fundamentally different approach is required.
At Tucuvi, we’ve been developing and deploying Conversational AI for healthcare for over five years. We work with large enterprise clients, including major healthcare systems like QuirónSalud Hospital Group—operating over 50 hospitals and serving millions of patients—as well as leading pharmaceutical companies like AstraZeneca.
In healthcare, AI applications demand the highest levels of safety, reliability, and clinical accuracy. While LLMs and GenAI have demonstrated impressive creative capabilities, they fall short in meeting these critical demands. Their inherent unpredictability and lack of deterministic control introduce significant risks in clinical settings, where precision and consistency are paramount.
That’s why the industry is focusing on Hybrid AI, a new standard that combines the adaptability of LLMs with the precision and control of traditional machine learning models. By orchestrating both within a single intelligent system, Hybrid AI enables high-performance conversational AI while adhering to the strictest clinical standards.
As highlighted in this recent article from Forbes, blending deterministic systems with generative models is proving to be the ideal strategy for risk-averse sectors like healthcare. Organizations are not just looking for innovation; they require AI solutions that meet the rigorous standards of clinical effectiveness and patient safety.
At the forefront of this revolution, Tucuvi harnesses the power of Hybrid AI to streamline clinical phone conversations with patients with an AI Agent named LOLA – an essential tool in addressing the global shortage of healthcare professionals. By blending LLM-based natural language capabilities with deterministic models, LOLA ensures clinically safe, context-aware, and reliable patient conversations, bridging the gap between AI innovation and real-world clinical needs.
Fully generative AI systems, those relying solely on LLMs, have transformed conversational AI, yet they remain insufficient for healthcare applications. Techniques such as fine-tuning, Retrieval-Augmented Generation (RAG), and reinforcement learning from human feedback (RLHF) have significantly reduced hallucination rates. However, by their very nature, these systems can only reduce, not eliminate, the risk of errors. LLMs hallucinate.
And when it comes to healthcare and patient safety, even a few errors are too many.
This limitation arises because LLMs are inherently probabilistic models that generate responses based on statistical patterns in their training data rather than true comprehension.
They do not 'understand' information as humans do; instead, they predict the most statistically probable next token. In patient care, where accuracy and reliability are paramount, even minor errors can compromise clinical decision-making.
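To see what “probabilistic” means in practice, here is a minimal, purely illustrative Python sketch; the toy token distribution is an assumption, not the output of any real model. Sampling the same prompt twice can yield different continuations, which is exactly where variability enters a generative system.

```python
import random

# Toy next-token distribution for the prompt "The patient reports chest ..."
# (illustrative numbers only, not taken from any real model).
next_token_probs = {"pain": 0.55, "tightness": 0.30, "discomfort": 0.15}

def sample_next_token(probs):
    """Sample a token proportionally to its probability, as an LLM decoder does."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Two runs over the same prompt can produce different continuations.
print(sample_next_token(next_token_probs))
print(sample_next_token(next_token_probs))

# A deterministic rule (argmax) always returns the same token, but real LLM
# deployments typically sample, so some variability is unavoidable.
print(max(next_token_probs, key=next_token_probs.get))  # -> "pain"
```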
That’s why the industry is exploring alternatives that prioritize safety and predictability, with leaders increasingly turning to Hybrid AI. Hybrid AI isn’t just an enhancement—it is, as of today, the only reliable approach to ensuring that AI-driven healthcare conversations are both effective and safe.
The key pillars that set Hybrid AI apart in healthcare are easiest to see in a real example.
Imagine the LOLA AI Agent working in transitional care, making a follow-up phone call to a heart failure patient after discharge.
The deterministic part of the architecture establishes the scope of the conversation: which medical concepts need to be discussed and which AI responses are clinically acceptable.
The LLM part of the architecture, in turn, makes the interaction truly conversational and empathetic: it captures patient input even when it falls outside the clinical scope (e.g., a patient sharing that their car has broken down and they have no transportation to a medical appointment) and responds as a human would, with care and empathy.
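A minimal sketch of how one such hybrid turn could work is shown below. The scope definition, the `call_llm` stub, and the wording are illustrative assumptions, not Tucuvi’s actual implementation: a deterministic scope gates what may be asked and recorded, while the LLM handles free-form, out-of-scope input with empathy.

```python
# Hypothetical sketch of one hybrid conversation turn for a heart-failure
# follow-up call.

HEART_FAILURE_SCOPE = {
    "dyspnea": "Have you felt more short of breath than usual?",
    "weight_gain": "Have you gained weight rapidly in the last few days?",
    "ankle_swelling": "Have you noticed swelling in your ankles or legs?",
}

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would invoke a hosted model."""
    return "I'm sorry to hear that. Let me make a note so your care team can help."

def handle_turn(concept: str, patient_utterance: str, detected_concepts: set) -> str:
    if concept not in HEART_FAILURE_SCOPE:
        raise ValueError("Concept outside the clinically approved scope")
    if detected_concepts & set(HEART_FAILURE_SCOPE):
        # In-scope answer: record it against the predefined clinical concept.
        return f"Recorded answer for '{concept}': {patient_utterance!r}"
    # Out-of-scope input (e.g. a transport problem): respond empathetically
    # via the LLM, without inventing clinical content.
    return call_llm(f"Reply with empathy, no medical advice: {patient_utterance}")

print(handle_turn("dyspnea", "My car broke down, I can't get to my appointment", set()))
```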
With over five years in the market, Tucuvi offers a safe and clinically validated solution with best-in-class patient acceptance (over 90% patient engagement across more than 40 protocols in 10+ specialties). A solution that, in Europe, is certified as Software as a Medical Device. In the last two years, we’ve been exploring how to expand the value provided to patients and care teams by incorporating LLMs, all while staying true to our core principles. This journey led us to implement the first Hybrid AI-based clinical agent in the market: LOLA.
For LOLA to function at its full potential, seamless coordination of multiple components is essential.
At the heart of Tucuvi’s AI system lies an AI Orchestrator, dynamically managing and directing conversations by invoking specialized agents for specific tasks. This ensures that every patient interaction with LOLA remains clinically safe, structured, and within approved guidelines. An Out-of-Scope Detector further enhances control by identifying and redirecting patient inputs that fall outside the intended conversational framework, preventing misinformation and maintaining focus on validated clinical content, while preserving an empathetic tone throughout the conversation.
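Conceptually, the orchestration could be sketched as follows. The agent names, the keyword-based out-of-scope stub, and the routing logic are simplified assumptions for illustration, not the production design: each patient utterance is routed either to the scripted clinical flow or to an empathetic redirection when it falls outside the approved scope.

```python
from dataclasses import dataclass

@dataclass
class TurnResult:
    agent: str   # which specialized agent handled the turn
    reply: str   # what the conversational agent says next

def out_of_scope(utterance: str, approved_keywords: set) -> bool:
    """Toy out-of-scope detector; a real one would be a trained classifier."""
    return not any(word in utterance.lower() for word in approved_keywords)

def orchestrate(utterance: str) -> TurnResult:
    approved = {"breath", "weight", "swelling", "medication"}
    if out_of_scope(utterance, approved):
        return TurnResult("empathy_agent",
                          "I understand. I'll pass this on to your care team, "
                          "and we can continue with a few health questions.")
    return TurnResult("clinical_flow_agent",
                      "Thank you. Has your weight changed in the last few days?")

print(orchestrate("My car is broken and I have no way to get to the clinic"))
print(orchestrate("I have been more short of breath this week"))
```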
To ensure unmatched precision, every conversation undergoes post-processing by an Automatic Reviewer that applies more advanced models than those used in real-time. If flagged, a human-in-the-loop further reviews the interaction, guaranteeing that the final precision of information sent to healthcare professionals exceeds 99.9%.
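A rough sketch of this two-tier review is below; the confidence threshold, field names, and scoring stub are hypothetical, not Tucuvi’s actual values. The idea is simply that a stronger offline model re-scores each structured finding, and low-confidence items are escalated to a human reviewer before anything reaches the care team.

```python
REVIEW_THRESHOLD = 0.98  # hypothetical cut-off

def automatic_review(finding: dict) -> float:
    """Stand-in for a heavier offline model scoring an extracted finding."""
    return 0.92 if finding["answer"] == "unclear" else 0.995

def finalize(findings: list) -> list:
    for finding in findings:
        score = automatic_review(finding)
        finding["confidence"] = score
        finding["needs_human_review"] = score < REVIEW_THRESHOLD
    return findings

print(finalize([
    {"concept": "dyspnea", "answer": "yes"},
    {"concept": "weight_gain", "answer": "unclear"},
]))
```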
Conversational flows are meticulously designed and coordinated based on clinically validated guidelines and real-world patient interactions. Over the past five years, a clinically rich dataset has been built by collecting and manually labeling hundreds of thousands of real clinical conversations. Each concept and response is tagged with a SNOMED-CT code, ensuring full interoperability across healthcare systems.
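As an illustration, one labeled record in such a dataset might look like the hypothetical example below. The field names and layout are assumptions, not Tucuvi’s schema; 267036007 is the SNOMED CT concept commonly used for dyspnea.

```python
# Hypothetical shape of one manually labeled conversation turn.
labeled_turn = {
    "utterance": "Yes, I get out of breath just walking to the kitchen",
    "concept": {
        "label": "dyspnea",
        "snomed_ct": "267036007",
    },
    "polarity": "present",          # present / absent / unknown
    "protocol": "heart_failure_followup",
    "annotator": "clinical_labeler_07",
}

print(labeled_turn["concept"]["snomed_ct"])
```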
With ID-NER achieving over 95% precision and recall for more than 250 medical concepts (and an average accuracy of 98.4%), LOLA ensures that every question asked and every response given is clinically appropriate and relevant, while leveraging the power of LLMs to keep the interaction conversational.
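A simplified picture of what that extraction step could return is sketched below; the `id_ner` stub, its output format, and the in-scope filter are assumptions for illustration only.

```python
# Illustrative in-domain NER step: map a patient utterance to clinically
# relevant concepts plus SNOMED CT codes, then keep only concepts that the
# deterministic flow has declared in scope for this protocol.

IN_SCOPE = {"dyspnea", "ankle_swelling", "weight_gain"}

def id_ner(utterance: str) -> list:
    """Stand-in for the trained NER model; returns hard-coded demo output."""
    return [
        {"concept": "dyspnea", "snomed_ct": "267036007", "polarity": "present"},
        {"concept": "anxiety", "snomed_ct": "48694002", "polarity": "present"},
    ]

def clinically_relevant(utterance: str) -> list:
    return [c for c in id_ner(utterance) if c["concept"] in IN_SCOPE]

print(clinically_relevant("I get breathless at night and I feel quite anxious"))
```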
Safety in healthcare isn’t just about the conversation—it’s also about timely interventions. Our alert engine relies on a predefined, structured set of alerts that are deterministic and customizable on a per-patient basis. By incorporating SNOMED-CT codes, the system ensures seamless interoperability and guarantees that any potential clinical risks are promptly flagged and addressed, further safeguarding patient interactions.
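A deterministic, per-patient alert rule could be expressed roughly like this; the rule contents, severities, and field names are illustrative assumptions rather than the production configuration.

```python
# Sketch of deterministic alert rules: if a flagged concept appears in a
# call's structured findings, raise an alert carrying its SNOMED CT code so
# downstream systems can consume it interoperably.

patient_alert_rules = {
    "dyspnea": {"snomed_ct": "267036007", "severity": "high"},
    "weight_gain": {"snomed_ct": "8943002", "severity": "medium"},
}

def evaluate_alerts(findings: list, rules: dict) -> list:
    alerts = []
    for f in findings:
        rule = rules.get(f["concept"])
        if rule and f["polarity"] == "present":
            alerts.append({"concept": f["concept"], **rule})
    return alerts

findings = [{"concept": "dyspnea", "polarity": "present"},
            {"concept": "ankle_swelling", "polarity": "absent"}]
print(evaluate_alerts(findings, patient_alert_rules))
```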
Unlike fully generative AI systems, which carry an inherent risk of hallucinations and unpredictable outputs, this Hybrid AI approach integrates LLMs in a highly controlled manner to enhance patient interactions without compromising clinical accuracy.
As AI continues to revolutionize healthcare, ensuring safety, reliability, and clinical accuracy is paramount. While fully generative AI models have shown promise in enhancing natural language interactions, their lack of deterministic control and regulatory compliance makes them risky to deploy at scale in enterprise applications.
This is why Hybrid AI is setting a new standard, combining structured, clinically validated frameworks with the adaptability of LLMs to create an AI system that is both highly scalable and medically reliable.
With over five years of real-world deployment, Tucuvi’s Hybrid AI architecture powers LOLA, a trusted and CE-certified AI clinical agent that seamlessly integrates into healthcare workflows, enabling safe, efficient, and intelligent patient management.
AI already plays a critical role in supporting clinical teams as they face growing challenges, from workforce shortages to the increasing demand for patient care. By combining technological innovation with medical rigor, Hybrid AI ensures that healthcare professionals have the tools they need to deliver high-quality, patient-centered care.
Whether you want to scale your capacity of care, automate repetitive tasks, improve care team efficiency, or reduce relapses through early interventions, we have a solution for you.