Medicine often begins in uncertainty. A patient comes in short of breath. That could be asthma, pneumonia, heart failure, a blood clot in the lung, or anxiety. The exam, the oxygen level, an X-ray, and basic blood tests each add a piece. Sometimes the X-ray is normal while a blood test is high. Sometimes the picture points to more than one problem. Most cases are a tangle of test results, histories, signs, and words. They rarely line up neatly. They overlap, contradict, and sometimes mislead.
For decades, computer scientists tried to tame this complexity with rules. If fever, an elevated white blood cell count, and an opacity on X-ray, then pneumonia. On paper, the logic looked airtight. In the clinic, it crumbled. Pneumonia without fever is common. White cells can be elevated for many other reasons. Chest pain can point just as easily to the heart. The rules multiplied endlessly. The certainty never improved.
The Shiny Surface of Deep Learning
Today’s deep learning and language models look more advanced, even dazzling. A neural network can scan millions of records and produce probabilities with impressive accuracy. Yet the weakness is the same. When faced with rare symptoms, conflicting tests, or multiple conditions, their confidence collapses. Models learn surface correlations rather than causes, so a change in hospital workflow or a new imaging device can break them. They struggle with missing or delayed data, with counterfactual questions such as what to do if a test cannot be ordered, and with evolving patient populations. They offer little accountability, since they rarely explain how they arrived at a decision. The surface is glossier than the old rules, but the brittleness remains underneath.
Inesse: Intelligence as Communication
During my doctoral research in the late 1980s, I asked a different question. What if medicine were not a set of rules at all? What if it were a conversation? Every symptom, every test, every disease, every drug, even the patient’s own words could be treated as an intelligent participant, speaking, listening, and adapting.
I called these entities Inesse, from the Kabyle word meaning “to say,” “to express,” or “to tell” — the act at the heart of how we communicate. Intelligence emerges from dialogue, where signals are exchanged, contradictions argued, and agreements reached. This insight became the foundation of my Smart Agents architecture, first presented in my PhD thesis in 1988.
From Theory to the First Applications
The academic notion of an “intelligent agent” existed before my work, but it was not deployable. There were too many abstract theories and too few paths to working systems. My focus was never on theory for its own sake but on creating AI that delivered real-world solutions. That is why I chose to pursue my PhD not in isolation within university research, but in direct confrontation with real-world challenges. Those challenges forced, and enabled, me to invent a new Smart Agent architecture that could actually be deployed.
Everything Is a Smart Agent
In my PhD work, everything was an agent, including the attributes that describe each entity. Fever. Cough. Chest pain. Fatigue. Traditional systems treated them as static data points. Smart Agents gave them voices. A fever agent did not just announce that temperature was high. It collaborated. With a cough agent, it strengthened the suspicion of infection. With weight loss, it leaned toward tuberculosis. With chest pain, it raised a pneumonia suspicion or a cardiovascular alarm. Meaning came from interaction, not isolation. Context shaped every signal.
A Smart Agent had a role, local knowledge, goals, and actions. It exchanged messages, updated its beliefs when new evidence arrived, and proposed next steps. Coordination emerged from many small decisions that aligned toward a shared aim.
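The shape of such an agent can be sketched in a few lines. This is an illustrative sketch only: the names (`SmartAgent`, `Message`, `receive`, `propose`) are hypothetical stand-ins, not the original MINDsuite implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    claim: str       # e.g. "infection"
    weight: float    # how strongly the sender supports the claim

@dataclass
class SmartAgent:
    name: str
    belief: dict = field(default_factory=dict)  # claim -> accumulated support
    log: list = field(default_factory=list)     # audit trail of exchanges

    def receive(self, msg: Message) -> None:
        # Update local belief and record the exchange,
        # so the reasoning stays visible and auditable.
        self.belief[msg.claim] = self.belief.get(msg.claim, 0.0) + msg.weight
        self.log.append(f"{msg.sender} -> {self.name}: {msg.claim} ({msg.weight:+.1f})")

    def propose(self) -> str:
        # Propose the currently best-supported claim as a next step.
        return max(self.belief, key=self.belief.get)

fever = SmartAgent("fever")
fever.receive(Message("cough", "infection", 0.6))
fever.receive(Message("xray", "infection", -0.3))
print(fever.propose())
```

Each agent holds only local knowledge; coordination comes from the messages, and the log is what makes the debate inspectable afterward.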
Lab tests, imaging, and specialist opinions were Smart Agents too, with their own voices. A lab test spoke in numbers. An X-ray spoke in shadows. A doctor’s judgment spoke from experience. None was absolute. Together, they joined the dialogue. Unlike today’s black-box AI, Smart Agents kept the reasoning visible. Doctors could see the debate as it unfolded.
Visibility was built in. Every message, assumption, and update was recorded. One could see which agents agreed, which objected, and why a conclusion prevailed. Instead of a single confidence score, there was a transparent explanation that could be audited and taught.
Diseases themselves acted as Smart Agents. Pneumonia rallied fever and cough to its side. Tuberculosis called on weight loss and night sweats. Autoimmune disease whispered fatigue and pain. They competed, argued, strengthened, or faltered as new evidence arrived. Diagnosis was no longer a single probability number. It became a contest of explanations, revealed in full view.
An Example
A patient presents with cough and mild fever. Cough and fever agents raise infection. The X-ray agent reports clear lungs. The allergy agent notes recent exposure. The system shifts: viral syndrome rises. Later, a film shows faint opacity. Pneumonia returns to contention. The plan adapts in real time.
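The scenario above can be traced with a toy scoring loop. The weights are illustrative assumptions, not clinical values; the point is how each report shifts the contest between hypotheses.

```python
# Competing hypotheses and their accumulated support.
hypotheses = {"pneumonia": 0.0, "viral syndrome": 0.0}

def report(agent: str, evidence_weights: dict) -> None:
    # Each agent's report nudges the hypotheses it speaks to.
    for hyp, w in evidence_weights.items():
        hypotheses[hyp] += w
    leader = max(hypotheses, key=hypotheses.get)
    print(f"after {agent}: leading hypothesis = {leader}")

report("cough+fever", {"pneumonia": 0.4, "viral syndrome": 0.4})
report("xray (clear lungs)", {"pneumonia": -0.5, "viral syndrome": 0.1})
report("allergy (recent exposure)", {"viral syndrome": 0.2})
report("later film (faint opacity)", {"pneumonia": 0.9})
```

The clear X-ray pushes the viral hypothesis ahead; the later film restores pneumonia to contention, exactly the kind of real-time shift the text describes.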
Treatments are agents too. A side-effect agent warns of allergy. A cost agent highlights constraints. The plan emerges not from a rigid table but from negotiation, as in any real medical team.
Why Smart Agents Work
Not rules. Not a black box. Rules freeze logic and fail on exceptions. Black boxes hide logic and resist correction. Smart Agents keep logic alive and visible. They learn from outcomes, revise beliefs, and explain every step.
- Resilient under missing or conflicting data, surfacing inconsistencies.
- Integrating labs, imaging, histories, and symptoms into a coherent belief state.
- Maintaining multiple active hypotheses for coexisting conditions.
- Requesting the next test or question with the highest value of information.
- Updating plans as evidence shifts while respecting safety, cost, and policy.
This is not a chain of rules, nor a single probability. It is a living debate.
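One capability in the list, requesting the test with the highest value of information, can be made concrete with a standard Bayesian sketch: rank tests by how much they are expected to reduce uncertainty over the hypotheses. The hypotheses, test names, and probabilities below are illustrative assumptions, not the original system.

```python
import math

def entropy(p: dict) -> float:
    # Shannon entropy of a distribution over hypotheses, in bits.
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

def posterior(prior: dict, likelihood: dict, result: str) -> dict:
    # Bayes update: P(h | result) proportional to P(result | h) * P(h).
    unnorm = {h: likelihood[h][result] * p for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

def expected_entropy(prior: dict, likelihood: dict) -> float:
    # Average posterior entropy over the possible test results.
    total = 0.0
    for result in ("pos", "neg"):
        p_result = sum(likelihood[h][result] * p for h, p in prior.items())
        total += p_result * entropy(posterior(prior, likelihood, result))
    return total

prior = {"pneumonia": 0.5, "viral": 0.5}
tests = {
    # P(result | hypothesis), illustrative numbers only.
    "chest_xray": {"pneumonia": {"pos": 0.9, "neg": 0.1},
                   "viral":     {"pos": 0.2, "neg": 0.8}},
    "rapid_swab": {"pneumonia": {"pos": 0.5, "neg": 0.5},
                   "viral":     {"pos": 0.6, "neg": 0.4}},
}

# Request the test that most reduces expected uncertainty.
best = min(tests, key=lambda t: expected_entropy(prior, tests[t]))
print(best)  # chest_xray: it discriminates far better than the swab
```

The near-uninformative swab leaves the posterior almost as uncertain as the prior, so the X-ray wins; the same calculation generalizes to any number of hypotheses and tests.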
The process of Smart Agents mirrors how meaning works in language. A word alone has no meaning. Meaning lives in sentences and situations. Pregnancy, for instance, acts as a global context that reshapes the weight of every symptom, intensifying some, excluding others. Words reinforce, cancel, or redirect each other, just as symptoms do in medicine.
A language model estimates probabilities by scanning the whole text, letting nearby phrases tilt meaning while distant cues still exert softer pressure. Smart Agents do the same, but with symptoms, tests, diseases, drugs, and patient voices. Each signal carries tentative meaning, revised as new evidence arrives, contradictions are argued, and agreements are reached. Meaning, diagnosis, and treatment emerge not from isolation but from communication in context.
Intelligence From Interaction
Rule-based AI imposes logic from above. Smart Agents grow intelligence from below. They embody four essential qualities:
- Adaptability: new agents can join without redesign.
- Resilience: contradictions drive negotiation, not failure.
- Personalization: the patient’s own voice is built in.
- Emergence: solutions arise from dialogue, not commands.
By comparison, much of modern AI remains brittle and opaque. Bigger networks do not fix this. Communication does.
Smart Agents, The Future of AI
Medicine was the crucible where this principle first proved itself. Symptoms spoke. Diseases argued. Tests challenged. Drugs intervened. Patients guided. From this, a larger vision emerged. Intelligence is not the product of more data or bigger models. It is the product of relationships, context, negotiation, and conversation. The first version of my Smart Agents platform (MINDsuite) was implemented at Necker Hospital in Paris. Before completing my PhD, I founded Conception en Intelligence Artificielle. Our first client was Solvay, the global leader in essential chemicals, where Smart Agents were deployed to supervise large-scale, cross-country network infrastructures. These deployments marked a decisive moment in the history of AI: the birth of Smart Agents as a practical technology, and the first time intelligent agents, once confined to academic theory, were successfully applied in both medicine and industry.
MINDsuite later became the foundation of Brighterion, the company I built into a global leader in mission-critical AI. Brighterion’s Smart Agent technology now protects the world’s financial ecosystem, securing billions of transactions for Mastercard and financial institutions worldwide. It is also deployed in Homeland Security, where it supports national defense and public safety.
Key Takeaway
Smart Agents proved first that intelligence is not built on data alone but on relationships, context, and communication. That truth, born decades ago, is still the future of AI.