Artificial intelligence (AI) has become an important part of human life, with numerous benefits. This publication investigates one of those benefits: the development of an intelligent support system that enhances students' personal development. It explores advanced AI models to build an intelligent chatbot that provides personalized support and guidance to students. The system uses Beautiful Soup for web scraping to gather data from the Edinburgh Napier University website, and integrates a transformer model with the Rasa framework and GPT-3 to generate better responses. A comparative analysis of two versions of the chatbot is presented, based on the System Usability Scale (SUS). By leveraging AI technologies, this research contributes to improving the field of educational support systems and aims to inspire further research and innovation in applying AI to educational challenges and student needs.
Today, the impact of conversational artificial intelligence (AI) is widely recognised: 24/7 availability, efficient customer support, and enhanced user experience (Suram and Namatherdhala, 2022). It is therefore widely used across industries and service sectors such as customer service and support, education, and banking. This paper explores conversational AI in the form of a virtual assistant acting as a personal development tutor (PDT) that supports individual student needs at short notice. The manual approach faces several problems: even when the PDT is a qualified mentor, students struggle to find the time and resources to engage in learning activities or to seek guidance, and the lack of immediate support can cause frustration and disengagement over time. Similarly, PDT mentors face their own challenges, including the need for 24/7 availability and relevant information for different learners. Hence, this paper aims to develop an advanced AI-based approach that delivers experienced personal development support to students. The effectiveness of current approaches to data collection and text generation is limited by several challenges:
- Difficulty in extracting structured data from unstructured web content.
- Limited capabilities of existing text generation models in producing contextually relevant content.
- Addressing the ethical considerations surrounding web scraping practices.
- Managing time constraints during development while conducting the research.
Addressing these challenges is important to the success of this system, which provides personalized support and guidance to students. Advanced methodologies were developed to overcome them, such as using a transformer model to address the limited capabilities of existing text generation models, and using Miro, a project management tool, to organise the overall work and complete each stage before its deadline. The proposed system targets conversations with students who need information about modules and finance issues, achieving natural language understanding through model development and contextually relevant responses. The following objectives were set to achieve the targeted system:
- Research on the published related papers.
- Gather data from Napier website using web scraping.
- Integrate Rasa ML/AI framework with transformer model.
- Train the chosen model on relevant data for response generation.
- Evaluate and test the system.
To address these challenges and objectives in generating contextually relevant responses for PDT, this project proposes to gather data using Beautiful Soup for web scraping and to seamlessly integrate the Rasa framework with the GPT-3 API using the Hugging Face transformer model, enhancing text generation capabilities. Additionally, Shen et al. (2023) describe that combining the Hugging Face transformer model and GPT-3 provides enhanced performance, flexibility, and adaptability in generating relevant responses.
This paper is structured as follows. Related works follow the introduction, and the Method/Approach section gives a detailed description of the proposed model, including its architecture, methodology, and integration with GPT-3. The web scraping model is described in the data collection and preprocessing section. The evaluation method and its results are presented in the results and discussion section. Finally, the paper concludes with a summary of findings, ethical considerations for conversational AI for PDT, and future developments in the related sections.
The Conversational Assistant PDT project rests on three key elements. First, conversational emotional intelligence: sentiment analysis is integrated so the assistant can engage in emotionally intelligent interactions, detecting emotional expressions in students' written responses and responding with sensitivity and understanding. Second, advanced Natural Language Understanding (NLU): sophisticated Natural Language Processing (NLP) techniques are leveraged to accurately decipher users' emotional states, allowing nuanced emotional expressions to be interpreted effectively. Third, personalised support for students: the assistant offers immediate, tailored assistance based on their emotional states and preferences, ensuring that students receive dynamic and responsive guidance suited to their individual emotional needs, ultimately contributing to their overall well-being and academic success.
A study conducted in Pakistan aimed to develop an Artificially Intelligent (AI) chatbot, named 'Bablibot', designed to assist caregivers by providing information regarding immunisation (Siddiqi et al., 2024). The study intentionally utilised both quantitative and qualitative data collection methods, such as in-depth interviews and user surveys, to enhance the performance and functionality of the chatbot. The researchers plan to leverage incoming 'Bablibot' data to further train the model and explore additional training datasets, considering the increasing popularity of local language chatbots post-study.
This approach underscores the commitment to utilising gathered data to continually improve the chatbot's effectiveness and adaptability. Valuable insights learned from these findings offer prospects for enhancing and potentially extending the utility of our chatbot in similar resource-constrained contexts. The quantitative component of the study involved 20 phone-based interviews with randomly selected participants (Siddiqi et al., 2024).
This qualitative data example has been incorporated into our own testing, although other methods, such as surveys or in-person sessions, proved to be more suitable approaches. Our surveys were tailored to provide insights into user experiences and understand reasons for non-engagement with the bot. This structured testing process significantly contributed to enhancing the effectiveness and usability of our chatbot as an academic support tool.
As part of our research, we conducted role-played conversations between students and tutors to identify emotion-driven dialogue, aligning with previous studies proposing an emotion generation model. The method employed by Zhao et al. (2024) utilized an exploratory technique to improve data quality, employing a pretrained "affective generation model" for sentiment analysis and dialogue recognition. Additionally, they extracted context features from conversation text and utilized an algorithm to classify mixed data, enabling training of the bot from structured conversations.
In our research for dataset gathering, we opted for the Wizard of Oz technique to achieve a comparable outcome, simulating authentic conversations between students and their tutors on specific topics. This is a well-established and effective method for AI chatbot testing, supporting research objectives with few technical requirements (Kuang et al., 2023). This approach allowed us to replicate genuine interactions and capture emotion-driven dialogue for analysis.
In virtual assistant platforms, both Amazon Alexa and Microsoft Cortana are leading examples, offering a range of functionalities and capabilities achieved with advanced natural language processing (NLP) technologies (Major et al., 2022). User queries undergo understanding and processing by leveraging large datasets and employing sophisticated algorithms to enhance their language models (Nguyen-Mau et al., 2024). Amazon Alexa excels in consumer-oriented tasks like smart home control and entertainment, while Microsoft Cortana specializes in productivity and workplace functions such as calendar management and email integration (Major et al., 2022).
The effective combination of Dialogue, Natural Language Processing (NLP), and datasets is fundamental to the advancement of AI platforms, allowing for the development of tailored skills and seamless integrations. However, amidst the emphasis on technical capabilities, the crucial element of user experience often remains overlooked (Adesina, n.d.).
In spoken language interaction, the quality of dialogue is a determinant of user satisfaction and engagement. (Gašić et al., 2017) highlights the multifaceted nature of dialogue quality, detailing an assessment through mechanisms such as reward functions and dialogue policy. This dialogue policy, designed to map belief states to system actions, stands as a pivotal method for optimizing dialogue quality. The 'belief state', derived from N-best hypotheses, is the guiding force behind dialogue policy decisions, having a profound influence on the overall conversational experience.
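To make the belief-state idea concrete, the mapping from belief state to system action can be sketched as a toy rule: pick the most probable intent hypothesis and fall back to a clarification action when confidence is low. The function and threshold below are illustrative assumptions, not the cited work's method; in practice a dialogue policy such as Rasa's is learned from data rather than hard-coded.

```python
def select_action(belief_state: dict, threshold: float = 0.6) -> str:
    """Map a belief state (intent -> probability over the N-best
    hypotheses) to a system action name."""
    intent, confidence = max(belief_state.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        # Low confidence across all hypotheses: ask the user to clarify.
        return "utter_ask_rephrase"
    return f"utter_{intent}"

# A confident belief state maps straight to the matching response action.
print(select_action({"course_fee": 0.8, "credits": 0.15}))  # utter_course_fee
```

Even this toy version shows why the belief state drives the conversational experience: a poorly calibrated threshold either produces needless clarification questions or confidently wrong responses.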
Our academic support chatbot draws inspiration from these platforms, leveraging similar functionalities and capabilities to play an important role in assisting students throughout their academic journey. While technical advancements drive the capabilities of AI platforms, it is imperative to prioritize the refinement of dialogue quality to ensure meaningful user interactions and heightened user satisfaction (Venkatesh et al., n.d.).
This conference paper provides an in-depth analysis of a Conversational Assistant Personal Development Tutor (PDT) project, focusing on its system architecture, natural language understanding (NLU), and natural language generation (NLG) processes. The project integrates various technologies such as Rasa ML/AI, web scraping, and GPT-3 model from OpenAI to facilitate an effective question and answer as well as emotional detection. Through a combination of curated datasets, web scraping, and advanced AI models, the PDT aims to provide personalised and empathetic responses to user inquiries and concerns. This paper delves into the architectural design of the system and elucidates the key components involved in NLU and NLG processes, shown in Figure 1.
Figure 1: System Architecture
The system architecture of the Conversational Assistant PDT encompasses several interconnected components that work collaboratively to facilitate seamless interactions with users. At its core, the architecture comprises:
The project utilises the Rasa framework for developing the conversational agent. Rasa provides tools for NLU, dialogue management, and NLG, enabling the system to understand user intents, maintain context, and generate appropriate responses (Mathur et al., 2021; Bocklisch et al., 2017).
Below are some of the Rasa configuration files. Here is the link to the full repository: PDT-bot
nlu.yml
version: "3.1"
nlu:
- intent: credits
  examples: |
    - What are credits for [Software Development](subject)
    - What are credits for [Web Technologies](subject)
    - What are credits for [Masters Dissertation](subject)
    - credits for [Performance Studies: Integrated Musicianship](subject)
    - credits for [Civil Engineering Materials](subject)
    - credits for [Computer Systems](subject)
- intent: course_fee
  examples: |
    - what is the course fee for [Msc Computing](fee)
    - what is the course fee for [Msc Drug Design and Biomedical Science](fee)
    - what is the course fee for [Msc Global Logistics and Supply Chain Analytics](fee)
- intent: fee_installments
  examples: |
    - I have a problem with my fee instalments
    - Fee Installments Problem
    - Fee problem
- intent: less_installment
  examples: |
    - What if I pay a less agreed instalment than the actual amount?
    - what if i pay less installment
    - i cant pay full installment at single payment
- intent: remaining_fee
  examples: |
    - Will the remaining fee added to the next instalment automatically?
    - will my installment add automatically to my next payment
rules.yml
version: "3.1"
rules:
- rule: Say goodbye anytime the user says goodbye
  steps:
  - intent: goodbye
  - action: utter_goodbye
- rule: Say 'I am a bot' anytime the user challenges
  steps:
  - intent: bot_challenge
  - action: utter_iamabot
stories.yml
version: "3.1"
stories:
- story: student_representative_story
  steps:
  - intent: student_representative
  - action: utter_student_representative
- story: hopeless_module_story
  steps:
  - intent: hopeless_module
  - action: utter_hopeless_module
- story: supervisor_for_dissertation_story
  steps:
  - intent: supervisor_for_dissertation
  - action: utter_supervisor_for_dissertation
- story: need_dissertation_goals
  steps:
  - intent: need_dissertation_goals
  - action: utter_need_dissertation_goals
- story: seek_guidance_module_selection_story
  steps:
  - intent: seek_guidance_module_selection
  - action: utter_seek_guidance_module_selection
- story: Course_Fee_story
  steps:
  - intent: course_fee
  - action: action_extract_course_fees
- story: feeling_frustrated_with_financial_issues_story
  steps:
  - intent: feeling_frustrated_with_financial_issues
  - action: utter_feeling_frustrated_with_financial_issues
To enrich the knowledge base of the PDT-bot, a web scraping module is integrated into the system. This module utilises BeautifulSoup to extract information from the institution's PDT website, ensuring that the responses provided to users are up-to-date and relevant. By extracting and processing data through web scraping, the system gives reliable and current responses to queries about course credits, fees, and other information subject to periodic change. This multi-faceted approach, combining advanced language models and web scraping, significantly strengthens the system's capacity to understand and respond successfully to a wide range of student queries, leading to a better user experience. Below is the sample code for this implementation approach:
action.py
from typing import Any, Text, Dict, List

import requests
from bs4 import BeautifulSoup
from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


class ActionExtractCredits(Action):
    def name(self) -> str:
        return "action_extract_credits"

    def fetch_credits_online(self, code):
        url = f"https://www.modules.napier.ac.uk/Module.aspx?ID={code}"
        try:
            page = requests.get(url)
            soup = BeautifulSoup(page.content, "html.parser")
            credits_span = soup.find("span", id="ctl00_ContentPlaceHolder1_LBLscqlvalue")
            credits_text = credits_span.get_text().strip() if credits_span else "Credits information not found."
            return credits_text
        except requests.RequestException:
            return "Failed to retrieve data."

    async def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: dict) -> list:
        subject = next(tracker.get_latest_entity_values('subject'), None)
        if subject:
            subject = subject.lower()
        subject_codes = {
            "software development": "SET07402",
            "web technologies": "SET08220",
            "masters dissertation": "SOC11101",
            "performance studies: integrated musicianship": "MUS07138",
            "civil engineering materials": "CTR07100",
            "computer systems": "CSN07105",
        }
        code = subject_codes.get(subject, None)
        if not code:
            credits_text = "No valid code found for the specified subject."
        else:
            credits_text = self.fetch_credits_online(code)
        dispatcher.utter_message(text=f"Credits for {subject}: {credits_text}")
        return []


class ActionExtractCourseFees(Action):
    def name(self) -> str:
        return "action_extract_course_fees"

    def fetch_fees_online(self, course_url_suffix):
        base_url = "https://www.napier.ac.uk/courses/"
        full_url = f"{base_url}{course_url_suffix}"
        try:
            page = requests.get(full_url)
            soup = BeautifulSoup(page.content, "html.parser")
            fees_info = {}
            for row in soup.find_all('tr'):
                region_cell = row.find('td', class_="one")
                fee_cell = row.find('td', class_="two")
                if region_cell and fee_cell:
                    region = region_cell.get_text(strip=True)
                    fee = fee_cell.get_text(strip=True)
                    if fee and fee != '£':
                        fees_info[region] = fee
            return fees_info if fees_info else "No valid fee data found on the page."
        except requests.RequestException as e:
            return f"Failed to retrieve fee information due to: {str(e)}."

    async def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: dict) -> list:
        course = next(tracker.get_latest_entity_values('fee'), None)
        if course:
            course = course.lower()
        else:
            dispatcher.utter_message(text="Please specify the course you are interested in.")
            return []
        course_dict = {
            "msc computing": "msc-computing-postgraduate-fulltime",
            "msc drug design and biomedical science": "msc-drug-design-and-biomedical-science-postgraduate-fulltime",
            "msc global logistics and supply chain analytics": "msc-global-logistics-and-supply-chain-analytics-postgraduate-fulltime",
        }
        course_url_suffix = course_dict.get(course, None)
        if not course_url_suffix:
            dispatcher.utter_message(text=f"No valid URL found for the specified course: {course}")
            return []
        fee_info = self.fetch_fees_online(course_url_suffix)
        if isinstance(fee_info, str):
            dispatcher.utter_message(text=fee_info)
        else:
            filtered_fees = {k: v for k, v in fee_info.items()
                             if "please note" not in k.lower() and "discount" not in k.lower()}
            fee_message = ", ".join([f"{k}: {v}" for k, v in filtered_fees.items()])
            dispatcher.utter_message(text=f"Fee information for {course}: {fee_message}")
        return []
The system additionally employs a rule-based approach to handle common queries and map specific entities to predefined codes or URLs. This rule-based component enables efficient handling of frequently asked questions and streamlines the retrieval of relevant data, providing a seamless experience for students seeking straightforward answers to common queries.
The bot takes a multi-pronged approach to handle the wide variety of student queries effectively. Initially, the GPT-2 language model was implemented to cope with open-ended and context-dependent queries. While GPT-2 offered improvements over traditional rule-based systems in producing natural and contextual responses, its performance fell short when handling more complex or nuanced queries. To overcome these limitations, the system leveraged the more advanced GPT-3.5 language model developed by OpenAI. GPT-3.5 has demonstrated superior performance in producing natural, contextually relevant, and insightful responses.
By harnessing this state-of-the-art model, the system can engage in more human-like conversations and demonstrate a deeper understanding of a query's context and nuances. With GPT-3.5 at its core, the system can provide more coherent, appropriate, and insightful answers to complex or open-ended questions, surpassing the limitations of GPT-2.
action.py
from typing import Any, Text, Dict, List

import requests
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ChatGPT(Action):
    def __init__(self):
        self.url = "https://api.openai.com/v1/chat/completions"
        self.model = "gpt-3.5-turbo"
        api_key = ''
        self.headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}"
        }
        self.prompt = "Answer the following question, based on the data shown. " \
                      "Answer in a complete sentence and don't say anything else."

    def name(self) -> Text:
        return "action_generate_response"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        user_input = tracker.latest_message['text']
        content = self.prompt + "\n\n" + user_input
        body = {
            "model": self.model,
            "messages": [{"role": "user", "content": content}]
        }
        response = requests.post(url=self.url, headers=self.headers, json=body)
        answer = response.json()["choices"][0]["message"]["content"]
        dispatcher.utter_message(text=f"{answer}")
        return []
In addition to predefined responses and web-scraped information, the system leverages OpenAI's GPT-3 model for generative responses. GPT-3 employs deep learning techniques to generate human-like text based on the context provided, enhancing the system's ability to engage in natural and meaningful conversations (Sharma, 2020).
To enable user interaction, the system features a web-based chatbot interface accessible via a browser. This interface provides users with an intuitive platform to communicate with the PDT and seek assistance with their queries or concerns.
Figure 2: Chatbot Interface
The NLU and NLG processes play pivotal roles in enabling the system to understand user inputs and generate appropriate responses. The following steps outline how the system interacts with users, processes their inputs, and generates responses:
User Input Processes:
- Upon receiving a user input, the system employs NLU techniques to parse and extract relevant information such as intents, entities, and sentiment (Jiao, 2020).
- The NLU component analyses the user's query and determines the underlying intent, which could range from seeking information to expressing emotions or concerns.
Intent Classification and Entity Recognition:
- Using trained NLU models based on curated datasets, the system classifies the user's intent and identifies any pertinent entities mentioned in the input.
- Intent classification enables the system to understand the purpose behind the user's query, while entity recognition helps in extracting specific details relevant to the inquiry.
Response Generation:
- Based on the identified intent and extracted entities, the system proceeds to generate an appropriate response. This response can be derived from predefined templates, web-scraped information, or generated dynamically using the GPT-3 model.
- NLG techniques are employed to structure the response in a coherent and contextually appropriate manner, ensuring that it effectively addresses the user's query or concern.
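The routing between the three response sources described above can be sketched as a simple dispatcher. The function and handler names below are hypothetical illustrations; in the actual system this routing is realised through Rasa stories, rules, and a fallback action rather than a single function.

```python
def generate_response(intent, entities, templates, scraped_lookup, gpt_fallback):
    """Route a classified intent to one of three response sources."""
    if intent in templates:
        # 1. Predefined template response (domain utterances).
        return templates[intent]
    scraped = scraped_lookup(intent, entities)
    if scraped is not None:
        # 2. Live web-scraped information (e.g. credits, fees).
        return scraped
    # 3. Fall back to the generative model for open-ended queries.
    return gpt_fallback(intent, entities)

# Hypothetical usage: a recognised intent resolves to its template.
reply = generate_response(
    "goodbye", {},
    templates={"goodbye": "Bye! Good luck with your studies."},
    scraped_lookup=lambda i, e: None,
    gpt_fallback=lambda i, e: "(generated answer)")
print(reply)  # Bye! Good luck with your studies.
```

Ordering the sources this way keeps responses deterministic where curated answers exist, reserving the generative model for queries the other two sources cannot cover.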
Emotional Detection and Response:
- In addition to providing information, the system is equipped to detect emotional cues in user inputs using sentiment analysis techniques.
- Upon detecting emotional expressions such as distress or frustration, the system responds empathetically, offering support and guidance tailored to the user's emotional state.
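As a minimal sketch of this emotional-detection step, a lexicon-based cue detector can flag distress in a message and prepend an empathetic acknowledgement. The cue list and function names below are illustrative assumptions; the deployed system's sentiment analysis may rely on a trained classifier instead of a fixed lexicon.

```python
# Illustrative distress lexicon; a real system would use a broader,
# validated resource or a trained sentiment model.
DISTRESS_CUES = {"stressed", "frustrated", "overwhelmed", "anxious", "hopeless"}

def detect_distress(message: str) -> bool:
    """Return True if the message contains a known distress cue."""
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    return not tokens.isdisjoint(DISTRESS_CUES)

def empathetic_prefix(message: str) -> str:
    """Prepend an empathetic acknowledgement when distress is detected."""
    if detect_distress(message):
        return "I'm sorry you're feeling this way. Let's work through it together. "
    return ""

print(empathetic_prefix("I am so stressed about my dissertation"))
```

A lexicon like this is cheap and transparent but brittle (it misses negation and paraphrase), which is one motivation for the trained NLU models discussed later.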
The Conversational Assistant PDT project exemplifies the integration of advanced AI technologies to create a responsive and empathetic virtual tutor. By leveraging the capabilities of Rasa ML/AI, web scraping, and GPT-3 model, the system adeptly handles user queries, provides accurate information, and offers emotional support when needed. The architectural overview and elucidation of NLU and NLG processes presented in this paper underscore the system's sophistication and its potential to enhance the learning and well-being of students.
Further research and refinement of the system could lead to even greater effectiveness in addressing a wide range of academic and personal concerns.
The bot leverages advanced deep learning techniques to enable robust natural language understanding (NLU) capabilities through the Rasa framework. NLU is a crucial component in conversational AI systems like Rasa because it interprets user messages to recognise their underlying intents and extract relevant information.
This functionality is essential for the system to engage in meaningful dialogues and provide suitable responses. At the core of Rasa's NLU component lie deep learning models responsible for intent classification and entity extraction. Intent classification involves determining the intention or purpose behind a user's utterance, while entity extraction identifies and extracts specific pieces of information from the utterance, such as course names, module codes, or fee-related information. Rasa employs state-of-the-art deep learning architectures to process user messages and make accurate predictions about their intents and entities.
The deep learning models employed in Rasa's NLU component are highly adaptable and can be fine-tuned or retrained as new training data becomes available, ensuring that the system remains up to date and responsive to evolving language patterns and user behaviours. By leveraging deep learning techniques, the bot can accurately interpret user inputs, classify their intents, and extract relevant entities, forming the foundation for meaningful conversations, answering questions, and providing suitable responses to user inquiries. This deep-learning-powered NLU component is a vital factor in the system's overall ability to deliver a seamless and intelligent conversational experience to students.
Data collection lies at the heart of this process, involving the gathering of conversational data from numerous sources and converting it into a format suitable for training AI models. In this particular case, transcriptions of conversations were gathered and converted into YAML format, a structured and human-readable markup language.
This method allows for the systematic organisation of dialogues, intents, and corresponding responses, facilitating seamless integration into the training pipeline. Once collected, the raw conversational data undergo a rigorous preprocessing phase to ensure their quality and suitability for training AI models. The transformation of transcriptions into YAML format represents a pivotal step in the data collection and preprocessing pipeline for conversational AI.
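The transcription-to-YAML step can be sketched as follows, assuming annotators have already grouped utterances by intent. The helper name and input shape are illustrative, not the project's actual pipeline; the output mirrors the Rasa 3.1 NLU format shown earlier.

```python
def transcripts_to_nlu_yaml(labelled_turns: dict) -> str:
    """Convert labelled transcription turns into Rasa-style NLU YAML.

    `labelled_turns` maps an intent label to the list of user utterances
    that annotators assigned to it (a hypothetical intermediate format).
    """
    lines = ['version: "3.1"', "nlu:"]
    for intent, examples in labelled_turns.items():
        lines.append(f"- intent: {intent}")
        lines.append("  examples: |")
        for utterance in examples:
            # Each utterance becomes one example line under the intent.
            lines.append(f"    - {utterance}")
    return "\n".join(lines)

sample = {
    "fee_installments": [
        "I have a problem with my fee instalments",
        "Fee problem",
    ]
}
print(transcripts_to_nlu_yaml(sample))
```

String assembly is used here to keep the sketch dependency-free; a production pipeline would more likely emit YAML via a library so that quoting and escaping are handled correctly.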
The Wizard of Oz technique, as described by Kuang et al. (2023), involves simulating the functionality of an automated system by having a human operator manually perform the tasks typically carried out by the system. In the context of our study, this technique was employed to mimic authentic conversations between students and their Personal Development Tutor (PDT) on real topics, particularly issues related to modules. Each scenario was carefully crafted to incorporate an emotional undertone aimed at eliciting various responses from the participants, thereby enriching the range of labels obtained from the transcriptions.
Using participants testing, detailed notes were taken during each test session, focusing on observing the participants' engagement with the system and their interactions with it. All issues encountered with the interface were also documented.
Chatbot v1: In the first task, participants engaged in a dialogue about academic stress, while the second task involved discussing the process of selecting a supervisor for their dissertation module.
Key points from participant testing notes:
- Often received no response.
- Participants had to re-word or simplify their input.
- Responses were unhelpful and asked clarifying questions about information already provided by the participant.
- The bot repeated responses, hindering task completion.
The SUS forms yielded generally low scores, averaging 29.17% (see Figure 3). Conversations were recorded in the chatbot and later classified to enhance the NLU component of Rasa.
Figure 3: SUS Chatbot Testing v1
Chatbot v2: The second version incorporated the updated intents and the utilisation of GPT-3.5 turbo, serving as the default response for unrecognised inputs. SUS results significantly improved, averaging 67.5% (see Figure 4).
Figure 4: SUS Chatbot Testing v2
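For reference, the SUS averages reported above follow the standard scoring formula: positively worded odd items contribute (rating − 1), negatively worded even items contribute (5 − rating), and the sum is scaled by 2.5 to give a 0–100 score per participant. A minimal sketch (function names are ours, not from the study):

```python
def sus_score(responses):
    """Compute the System Usability Scale score for one participant.

    `responses` is a list of ten Likert ratings (1-5), ordered item 1..10.
    Odd-numbered items are positively worded, even-numbered negatively.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                   # scale to 0-100

def average_sus(all_responses):
    """Average SUS score across participants, as reported per version."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)

print(sus_score([3] * 10))  # 50.0, the neutral midpoint
```

The jump from roughly 29 to 67.5 between versions is therefore a move from well below to near the commonly cited "average usability" region of the scale.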
Ethical considerations are foundational in the development of chatbots, necessitating meticulous attention to aspects such as privacy, bias, and potential user manipulation. In our project, these concerns were paramount, evidenced by the formation of a dedicated team tasked with managing dialogues (DM Team) to uphold the integrity of conversational flow. In a more recent development, the design process of chatbots places a greater emphasis on openness, fairness, and user consent. The significance of ethical practices in technological innovation is highlighted by industry standards and guidelines like the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and the ACM Code of Ethics (Chatila et al., 2017).
Our project underscores a seamless collaboration among the Natural Language Processing (NLP), Natural Language Understanding (NLU), and Dialogue Management (DM) teams, ensuring the prioritization of ethical considerations. Furthermore, real-world scenario datasets for training the Rasa model were obtained through live interviews with student Personal Development Tutors (PDT), enriching our research with practical insights. In the realm of chatbot development, ethical considerations remain paramount. Transparency is vital to ensure users are aware of interacting with automated systems (Liu & Singh, 2020), while privacy and data protection must adhere to regulations like GDPR or CCPA (Acquisti et al., 2015; Liu & Singh, 2020). Chatbots must offer accurate information, avoid bias (Molla et al., 2019), and provide fair and inclusive interactions (Miconi et al., 2018).
Recent chatbot applications, spanning mental health support, customer service, education, and healthcare, exemplify adherence to these ethical principles. By embracing such principles, developers can create chatbots that prioritize user trust and well-being, advancing the ethical frontier of AI-driven interactions.
This paper explores the potential of artificial intelligence (AI) in creating an intelligent support system for students, focusing on a personal development tutor chatbot. By integrating advanced AI models like GPT-3.5 with the Rasa framework, the chatbot provided personalized support and guidance, yielding promising results. The second version of the chatbot showed significant improvement in system usability scale (SUS) scores compared to the initial version, demonstrating the potential of advanced AI in educational support.
The research addressed challenges such as extracting structured data from unstructured web content and navigating ethical considerations in web scraping. Advanced methodologies and project management tools facilitated the successful development of the chatbot system. Moreover, the comparative analysis of the chatbot versions highlighted the effectiveness of advanced AI models in creating satisfying interactions, emphasizing the importance of cutting-edge technologies in educational support systems.
Future works could focus on fine-tuning GPT-3.5, which would improve response accuracy and contextual relevance. Additionally, enhancing semantic understanding would help extract precise meaning from student input. By prioritizing data security, the system can better protect sensitive information. Moreover, integrating sentiment analysis would support emotional needs, while adding voice assistance would boost accessibility. Expanding conversation topics to include career advice, personal guidance, and campus activities would provide comprehensive support.
Last but not least, we acknowledge the crucial contributions of team members and collaborators (Idowu Kennedy Yinusa, Aaron Darmudas, Yeshwasri Bokka, Saw Yu Nandar, Aye Thandar Phyo, Yash Bhardwaj, and Ayoola Stephen Ogunsola). Special thanks go to Dr. Yanchao Yu for assisting with data gathering, Dr. Md Zia Ullah for his invaluable guidance, and Michael Orme for his support. Their contributions were instrumental in achieving our goals and creating a more inclusive educational experience.
In conclusion, developing the intelligent chatbot system with advanced AI models marks significant progress in enhancing educational support systems. This research contributes to the ongoing exploration of AI technologies to address educational challenges and support student needs. Future innovation may further improve AI-powered educational support systems, benefiting students' personal development and academic success.