AI Core (chatbot)¶
This package implements the core AI logic of KusiBot, designed as a Multi-Agent System.
Agents¶
Manager Agent¶
- class kusibot.chatbot.manager_agent.ChatbotManagerAgent[source]¶
Central chatbot class that coordinates the different agents to generate a response to the user.
The chatbot is composed of the following agents:

- IntentRecognizerAgent: determines the intent of the user input.
- ConversationAgent: generates a response to the user input for “NORMAL” intents.
- AssesmentAgent: generates a response to the user input for the remaining intents and starts an assessment.
- intent_recognizer¶
The shared singleton instance of the intent classification agent.
- Type:
IntentRecognizerAgent
- conversation_agent¶
The agent responsible for handling general, non-assessment conversation.
- Type:
ConversationAgent
- assesment_agent¶
The agent responsible for administering mental health questionnaires.
- Type:
AssesmentAgent
- assessment_repo¶
Repository for assessment data access, used to check the current assessment state.
- Type:
- _handle_response_when_no_assesment(user_input, conversation_id)[source]¶
Handles the response generation when there is no active assessment. Determines the intent of the user input and decides whether to start an assessment or return a normal response.
- Parameters:
user_input – The message sent by the user.
conversation_id – ID of the current conversation.
- Returns:
JSON response containing the detected intent, the generated response, and the agent type.
- Return type:
agent_response
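A minimal sketch of this routing, assuming the agent interfaces documented in this section; the intent label string and the response dictionary keys are illustrative, not the package's actual values:

    # Illustrative routing sketch; "NORMAL" and the dict keys are assumptions.
    def _handle_response_when_no_assesment(self, user_input, conversation_id):
        intent = self.intent_recognizer.predict_intent(user_input)

        if intent == "NORMAL":
            response = self.conversation_agent.generate_response(
                user_input, conversation_id, intent)
            agent_type = "conversation"
        else:
            # Any non-normal (distress) intent starts an assessment.
            response = self.assesment_agent.generate_response(
                user_input, conversation_id, intent)
            agent_type = "assessment"

        return {"intent": intent, "response": response, "agent": agent_type}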
- generate_bot_response(user_input, user_id, conv_id)[source]¶
Orchestrates agents to generate a bot response based on the user input and the current conversation.
- Parameters:
user_input – The message sent by the user.
user_id – ID of the user sending the message.
conv_id – ID of the current conversation.
- Returns:
JSON chatbot response.
- Return type:
response
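A hedged usage sketch; the no-argument constructor and the example IDs are assumptions:

    from kusibot.chatbot.manager_agent import ChatbotManagerAgent

    manager = ChatbotManagerAgent()  # constructor arguments, if any, are not shown here

    # user_id and conv_id would normally come from the session and database layer.
    response = manager.generate_bot_response(
        user_input="I haven't been sleeping well lately",
        user_id=1,
        conv_id=42,
    )
    print(response)  # JSON chatbot response, as documented above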
Intent Recogniser Agent¶
- class kusibot.chatbot.intent_recognizer_agent.IntentRecognizerAgent(*args, **kwargs)[source]¶
BERT-based intent classifier agent. This class is responsible for predicting the intent of the user’s input.
- device¶
The computing device (CPU or CUDA GPU) on which the model is running.
- Type:
torch.device
- tokenizer¶
The tokenizer for preprocessing text to match the BERT model’s input format.
- Type:
BertTokenizer
- model¶
The fine-tuned BERT model loaded from the Hugging Face Hub.
- Type:
BertForSequenceClassification
- label_mapping¶
A dictionary mapping intent labels to their corresponding numerical class indices.
- Type:
dict
- reverse_label_mapping¶
A dictionary mapping numerical class indices back to their intent labels.
- Type:
dict
- _clean_text(text)[source]¶
Cleans the input text by removing unwanted characters, links, HTML tags, punctuation, and extra whitespace. Also converts the text to lowercase and removes words containing numbers.
- Parameters:
text – The input text to be cleaned.
- Returns:
The cleaned text.
- Return type:
str
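A sketch of these cleaning steps with standard regular expressions; the exact patterns used by the package may differ:

    import re
    import string

    def _clean_text(text):
        text = text.lower()
        text = re.sub(r"https?://\S+|www\.\S+", "", text)              # links
        text = re.sub(r"<.*?>", "", text)                               # HTML tags
        text = re.sub(f"[{re.escape(string.punctuation)}]", "", text)   # punctuation
        text = re.sub(r"\w*\d\w*", "", text)                            # words with numbers
        text = re.sub(r"\s+", " ", text).strip()                        # extra whitespace
        return text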
- _get_input_tensors_from_text(text)[source]¶
Converts the input text into tensors suitable for the BERT model.
- Parameters:
text – The input text to be tokenized.
- Returns:
A tuple containing the input IDs and attention mask tensors.
- Return type:
tuple
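A sketch using the Hugging Face tokenizer call; the padding strategy and maximum length are assumptions:

    def _get_input_tensors_from_text(self, text):
        encoding = self.tokenizer(
            self._clean_text(text),
            padding="max_length",   # assumed padding strategy
            truncation=True,
            max_length=128,         # assumed maximum sequence length
            return_tensors="pt",
        )
        input_ids = encoding["input_ids"].to(self.device)
        attention_mask = encoding["attention_mask"].to(self.device)
        return input_ids, attention_mask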
- predict_intent(text)[source]¶
Predicts the intent of the input text using the BERT model.
- Parameters:
text – The input text for which the intent needs to be predicted.
return_confidence – If True, returns the confidence of the prediction.
- Returns:
The predicted intent label, and the prediction confidence (float) when return_confidence is True.
- Return type:
str
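A sketch of the prediction step with PyTorch; the softmax-based confidence and the return_confidence branch are inferred from the description above rather than taken from the source:

    import torch

    def predict_intent(self, text, return_confidence=False):
        input_ids, attention_mask = self._get_input_tensors_from_text(text)

        self.model.eval()
        with torch.no_grad():
            logits = self.model(input_ids=input_ids,
                                attention_mask=attention_mask).logits

        probs = torch.softmax(logits, dim=-1)
        confidence, class_idx = torch.max(probs, dim=-1)
        intent = self.reverse_label_mapping[class_idx.item()]

        return (intent, confidence.item()) if return_confidence else intent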
Conversation Agent¶
- class kusibot.chatbot.conversation_agent.ConversationAgent(model_name='mistral')[source]¶
LLM-based conversation agent. This class is responsible for engaging in conversations given the user’s input.
- Parameters:
model_name (str, optional) – The name of the Ollama model to be used. Defaults to “mistral”.
- model¶
An instance of the Ollama language model client; None if the connection to the Ollama server fails.
- Type:
OllamaLLM | None
- prompt¶
A LangChain prompt template for generating empathetic conversational responses.
- Type:
ChatPromptTemplate
- msg_repo¶
Repository for message data access.
- Type:
- generate_response(text, conversation_id, intent=None)[source]¶
Generates a response based on the user’s input and the conversation context.
- Parameters:
text (str) – The user’s input text.
conversation_id (int) – The ID of the current conversation.
intent (str, optional) – The detected intent of the user’s input. Not used in this agent.
- Returns:
The generated response from the model.
- Return type:
str
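A sketch of the LangChain/Ollama wiring the attributes above suggest; the import paths, prompt wording, and template variables are assumptions:

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_ollama import OllamaLLM

    model = OllamaLLM(model="mistral")
    prompt = ChatPromptTemplate.from_template(
        "You are an empathetic mental-health companion.\n"
        "Conversation so far:\n{history}\n"
        "User: {user_input}\n"
        "Reply briefly and supportively."
    )

    chain = prompt | model
    reply = chain.invoke({"history": "", "user_input": "I feel a bit anxious today."})
    print(reply)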
Assessment Agent¶
- class kusibot.chatbot.assesment_agent.AssesmentAgent(model_name='mistral')[source]¶
LLM-based assessment agent. Handles the assessment conversation flow when a distress intent is detected by the BERT classifier, following a state machine pattern to manage the flow.
- Parameters:
model_name (str, optional) – The name of the Ollama model to be used. Defaults to “mistral”.
- model¶
An instance of the Ollama language model client; None if the connection to the Ollama server fails.
- Type:
OllamaLLM | None
- prompt_question¶
A LangChain prompt template for generating natural-sounding assessment questions.
- Type:
ChatPromptTemplate
- questionnaires¶
A dictionary containing the loaded structures for all available assessments (e.g., PHQ-9, GAD-7).
- Type:
dict
- state¶
The current state object in the state machine, which determines the agent’s behavior.
- Type:
BaseState
- conv_repo¶
Repository for conversation data access.
- Type:
- assess_repo¶
Repository for assessment data access.
- Type:
- msg_repo¶
Repository for message data access.
- Type:
- assess_question_repo¶
Repository for assessment question data access.
- _get_question_json(assessment_id)[source]¶
Get the question JSON to ask based on the assessment's current state.
- Parameters:
assessment_id – The ID of the current assessment.
- Returns:
The question JSON to ask, or None if not found.
- Return type:
dict
- _load_questionnaires()[source]¶
Load questionnaire metadata and questions from a JSON file.
- Returns:
A dictionary containing the loaded questionnaire data.
- Return type:
dict
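An illustrative loading sketch; the file name and JSON layout are assumptions based on the PHQ-9/GAD-7 description above:

    import json
    from pathlib import Path

    def _load_questionnaires(self):
        # Assumed layout, e.g.:
        # {"PHQ-9": {"questions": [{"id": 1, "text": "..."}, ...], "options": [...]}, ...}
        path = Path(__file__).parent / "questionnaires.json"  # assumed file name
        with open(path, encoding="utf-8") as f:
            return json.load(f)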
- _naturalize_question(question, question_id, user_id)[source]¶
Generate a naturalized version of the provided question, using conversational context to make the exchange more fluid.
- Parameters:
question – The question to naturalize.
question_id – The ID of the question.
user_id – The ID of the user the question is addressed to.
- Returns:
The naturalized question generated by the model, or the original question if the model is not available.
- Return type:
str
- _transition_to_next_state(state)[source]¶
Transition to the given state in the assessment state machine.
- Parameters:
state (AssessmentState) – The new state to transition to.
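A minimal sketch of the transition step, assuming states are plain objects stored on the agent:

    def _transition_to_next_state(self, state):
        # Swap the current state object; subsequent calls to generate_response
        # are delegated to this new state.
        self.state = state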
- generate_response(user_input, conversation_id, intent=None)[source]¶
Generate a response based on the user input and the current state of the assessment.
- Parameters:
user_input – The input from the user.
conversation_id – The ID of the current conversation.
intent – The intent detected by BERT, if any.
- Returns:
The response generated by the current state.
- Return type:
str
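A sketch of the delegation implied by the state machine pattern; how the active assessment ID is obtained is not documented here, so the lookup is left as a placeholder:

    def generate_response(self, user_input, conversation_id, intent=None):
        # Delegation sketch: the current state object produces the reply and,
        # when appropriate, triggers _transition_to_next_state().
        assessment_id = ...  # presumably resolved via self.assess_repo
        return self.state.generate_response(user_input, conversation_id, assessment_id)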
Assessment States¶
- class kusibot.chatbot.assesment_states.base_state.BaseState[source]¶
Base class for all assessment states.
- abstractmethod generate_response(user_input, conversation_id, assessment_id)[source]¶
Generate a response based on the user input and the current state of the assessment.
- Parameters:
user_input – The input from the user.
conversation_id – The ID of the current conversation.
assessment_id – The ID of the current assessment.
- Returns:
The response generated by the current state.
- Return type:
str
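A sketch of the abstract base class, assuming the standard abc module is used:

    from abc import ABC, abstractmethod

    class BaseState(ABC):
        """Base class for all assessment states."""

        @abstractmethod
        def generate_response(self, user_input, conversation_id, assessment_id):
            """Produce the state-specific reply; implemented by each concrete state."""
            raise NotImplementedError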
- class kusibot.chatbot.assesment_states.asking_question_state.AskingQuestionState[source]¶
State representing the agent asking a question in an assessment.
- generate_response(user_input, conversation_id, assessment_id)[source]¶
Generate a response based on the user input and the current state of the assessment.
- Parameters:
user_input – The input from the user.
conversation_id – The ID of the current conversation.
assessment_id – The ID of the current assessment.
- Returns:
The response generated by the current state.
- Return type:
str
- class kusibot.chatbot.assesment_states.waiting_free_state.WaitingFreeTextState[source]¶
State where the user is expected to provide a free-text response to the assessment question.
- generate_response(user_input, conversation_id, assessment_id)[source]¶
Generate a response based on the user input and the current state of the assessment.
- Parameters:
user_input – The input from the user.
conversation_id – The ID of the current conversation.
assessment_id – The ID of the current assessment.
- Returns:
The response generated by the current state.
- Return type:
str
- class kusibot.chatbot.assesment_states.waiting_cat_state.WaitingCategorizationState[source]¶
State where the user is expected to categorize their response by selecting an option from a list.
- generate_response(user_input, conversation_id, assessment_id)[source]¶
Generate a response based on the user input and the current state of the assessment.
- Parameters:
user_input – The input from the user.
conversation_id – The ID of the current conversation.
assessment_id – The ID of the current assessment.
- Returns:
The response generated by the current state.
- Return type:
str
- class kusibot.chatbot.assesment_states.finalizing_state.FinalizingState[source]¶
State for finalizing the assessment after all questions have been answered.
- generate_response(user_input, conversation_id, assessment_id)[source]¶
Generate a response based on the user input and the current state of the assessment.
- Parameters:
user_input – The input from the user.
conversation_id – The ID of the current conversation.
assessment_id – The ID of the current assessment.
- Returns:
The response generated by the current state.
- Return type:
str
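Taken together, the four concrete states suggest a per-question cycle; the transition points below are inferred from the state names, not from the source:

    # Illustrative flow for one questionnaire item:
    #
    #   AskingQuestionState        -- agent asks the (naturalized) questionnaire item
    #   WaitingFreeTextState       -- user answers in free text
    #   WaitingCategorizationState -- user picks a scored option from a list
    #   FinalizingState            -- after the last item, the result is stored
    #
    # Each state implements BaseState.generate_response, and the agent advances
    # by calling AssesmentAgent._transition_to_next_state(next_state).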