Nov 30, 2025 · 39 reads · MIT License
Certified under the Multi-Agent System module in the Mastering AI Agents program.

AI vs HUMAN: COMPETITION OR COLLABORATION

  • #HumanInTheLoop
  • #MultiAgentAI
  • ahmadtigress
    David Rufai Eneye



WELCOMING POEM

"The Tigress' Grace"

She weaves through circuits, clean and clear,
Her logic flows, her purpose near.
She finds the answers, shares the light,
With gentle silicon-delight.
But in her heart, she knows the line,
When human wisdom should align.
She feels your stress, she hears your tone,
And knows she shouldn't stand alone.
So she steps back, a strength, not flaw,
To let a human hand restore.
Her greatest power, you see,
Is knowing when to just let be.

Overview

In my previous publication, I talked about how to build a RAG system with open-source frameworks. Well, sit tight, because we are about to unveil something more interesting: building a multi-agent AI system with open-source tools. Not only that, we will connect our previous RAG system to this agent, along with an escalation_evaluator that determines when human action is needed and guardrails_ai for LLM output quality control. This makes the system more powerful and impactful. So, what are you waiting for? Let's get started!

1. Introduction

The Dawn of Context-Aware Companions

This publication delves into the architecture and philosophy behind "Tigra," an advanced AI assistant deployed at TIGRESS TECH LABS in Nigeria. Unlike standard chatbots that operate in isolation, Tigra represents a new breed of conversational AI: a system deeply integrated into a real-time communication platform (Matrix), empowered with a custom knowledge base, and, most importantly, designed to know its own limits. It is an AI that can gracefully hand over control to a human when the situation demands it.

This isn't just a technical project; it's a blueprint for building responsible, effective, and trustworthy AI systems for customer service, technical support, and sales.

2. Relevance

In an era where AI interactions often feel sterile and frustratingly limited, the human-in-the-loop paradigm is revolutionary. For businesses, it means automating the routine without abandoning the customer in complex scenarios. For users, it means their time and frustration are respected—they get instant answers for simple queries and human expertise for delicate or complicated issues. This hybrid approach bridges the gap between cold automation and the irreplaceable value of human empathy and judgment.

3. The Architecture: A Symphony of Specialized Components

Imagine an orchestra. The Conductor (Supervisor) doesn't play every instrument but listens to the piece (user query) and decides which musicians should perform. Tigra's architecture works on the same principle.

The Brain: Reasoning with Llama 3
At the core is a powerful large language model, tasked with understanding and generating human-like text. It's called via a secure, memory-managed API function.

# A snippet from huggingface_api.py - the secure bridge to the AI model.
def huggingface_completion(prompt: str) -> dict:
    try:
        # ... (model loading with error handling)
        response = pipe(
            prompt,
            max_new_tokens=512,
            temperature=0.7,
            top_p=0.9,
            do_sample=True,
            return_full_text=False,
        )
        # ... (memory cleanup and response parsing)
        return {'status': 1, 'response': output_text}
    except Exception as e:
        print(f"Hugging Face API call failed. Error: {e}")
        return {'status': 0, 'response': ''}

The Memory: The RAG System
Tigra does not just rely on its pre-trained knowledge. It has a Retrieval-Augmented Generation (RAG) system that acts as its corporate memory. When you ask about a specific product or service policy, it queries a local database of company documents to find the most relevant, up-to-date information before formulating an answer.
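The publication does not show the RAG code itself, so here is a minimal sketch of the retrieval step under simple assumptions: each company document has a precomputed embedding, and retrieval is a cosine-similarity search. All names here (`Document`, `retrieve`, `build_rag_prompt`) are invented for illustration and are not taken from the project.

```python
import math
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    embedding: list  # precomputed vector for this document

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_embedding: list, docs: list, k: int = 2) -> list:
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_embedding, d.embedding),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, context_docs: list) -> str:
    """Prepend the retrieved company documents to the user query."""
    context = "\n".join(d.text for d in context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

In a real deployment the embeddings would come from an embedding model and a vector store rather than hand-built lists, but the ranking-then-prompting flow is the same.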

The Tools: Extending Its Capabilities
The AI can perform actions. It has tools at its disposal, like a secure calculator and an appointment scheduler, which allow it to move beyond mere conversation and into utility.

# A snippet from custom_tools.py - giving our AI the power to act.
@tool
def calculator(expression: str) -> str:
    """Evaluate mathematical expressions and perform calculations."""
    try:
        validate_math_expression(expression)  # Security first!
        result = safe_eval(expression)        # Secure evaluation
        return f"Result: {result}"
    except Exception as e:
        return f"Error evaluating expression: {str(e)}"
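The snippet relies on `safe_eval`, which is not shown in the publication. A common way to implement safe evaluation is to parse the expression into an AST and allow only arithmetic nodes, so arbitrary code can never run; the stand-in below is a sketch of that technique, not the project's actual code.

```python
import ast
import operator

# Hypothetical stand-in for the project's safe_eval: only arithmetic
# AST nodes are whitelisted, so calls like __import__ are rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str):
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("Disallowed expression")
    return _eval(ast.parse(expression, mode="eval"))
```

Anything outside the whitelist (names, calls, attribute access) raises `ValueError`, which the calculator tool above would surface as an error message rather than executing.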

The Conductor: The LangGraph Workflow
This is where the magic of coordination happens. The entire conversation is a state that moves through a pre-defined graph of nodes.

# A simplified view from bot_graph.py - the roadmap for every conversation.
def create_workflow():
    workflow = StateGraph(AgentState)

    # Define the nodes (steps) in the process
    workflow.add_node("input_node", input_node)
    workflow.add_node("detect_query_type", detect_query_type_node)
    workflow.add_node("supervisor", supervisor_node)
    workflow.add_node("escalation_check", escalation_check_node)
    workflow.add_node("ask_human", ask_human_node)  # <-- The crucial human-in-the-loop node
    workflow.add_node("output_node", output_node)

    # Define the flow between nodes with conditional routing
    workflow.add_conditional_edges(
        "escalation_check",
        route_after_escalation_check,  # This function decides: Human or AI?
        {"ask_human": "ask_human", "output": "output_node"}
    )
    # ... graph is compiled and ready
    return app

4. The Masterstroke: The Humility to Ask for Help

The most innovative feature is the human-in-the-loop mechanism, managed by the EscalationEvaluator. This component continuously scores the conversation based on:

  • Sentiment Analysis: Is the user becoming angry or frustrated?
  • Complexity Detection: Does the query involve legal, financial, or multi-step technical issues?
  • Explicit Requests: Did the user directly ask for a human?
  • Conversation History: Has the same issue been unresolved for multiple turns?

When the escalation score crosses a threshold, the graph execution is interrupted. The AI pauses and sends a structured request for human guidance to a dedicated channel.

# How the AI asks for help (from bot_nodes.py)
human_question = f"""
🤖 **Human Guidance Requested**

**User Query:** {user_input}
**Reason for Escalation:** {escalation_reason}

**Options:**
1. 'proceed' - I'll handle this automatically
2. 'escalate' - Transfer to human agent
3. Or provide specific instructions

**Your decision:**"""
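What happens after the human replies is not shown in the publication; the sketch below illustrates how the three options in the guidance request could be mapped onto the next graph step. The node names (`output_node`, `handoff_to_human`, `supervisor`) and the dict shape are hypothetical.

```python
def route_after_human_reply(reply: str) -> dict:
    """Map the supervisor's free-text reply onto the next graph action.

    Mirrors the three options offered in the guidance request; the
    action names here are illustrative, not taken from the project code.
    """
    decision = reply.strip().lower()
    if decision == "proceed":
        # Human approved: let the AI answer on its own.
        return {"next": "output_node", "instructions": None}
    if decision == "escalate":
        # Hand the conversation over to a human agent.
        return {"next": "handoff_to_human", "instructions": None}
    # Anything else is treated as specific instructions for the AI to follow.
    return {"next": "supervisor", "instructions": reply.strip()}
```

Because LangGraph interrupts persist the conversation state, the graph can resume at exactly this decision point once the human's reply arrives.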

5. Conclusion: Working Together Works Better

The Tigress AI Assistant shows us something important: the smartest AI knows when to ask for help.
Instead of trying to do everything alone, this system combines the best of both worlds:

  • AI speed for quick answers and simple tasks
  • Human understanding for complex problems and emotional situations

By using the Matrix platform, it creates a smooth handoff between computer efficiency and human care. The AI handles routine questions quickly but steps aside when someone needs real human understanding.

This isn't about replacing people - it's about teamwork. The AI and humans work together, each doing what they do best. The computer provides fast, accurate information, while people provide empathy and judgment for tricky situations.

The real breakthrough is designing an AI that's humble enough to recognize its limits and smart enough to bring in human help when needed. This approach creates better customer experiences and more effective support systems where nobody gets stuck talking to a robot when they really need a person.

Github Repository URL:

https://github.com/AhmadTigress/customer-s_support_agent/tree/main

License

This project is licensed under the MIT License.

Connect with Me

  • GitHub: AhmadTigress
  • X (Twitter): @AhmadTigress
  • Kaggle: davidrufaieneye
  • Hugging Face: AhmadTigress
