In the rapidly evolving world of LLMs, we are starting to see large performance gains when these models are allowed to reason over problems before making decisions, writing code, or taking actions. Larger AI research organizations like OpenAI and Google can use methods like training, supervised fine-tuning, and reinforcement learning to teach their models to reason over problems and improve their response quality, but those models were all closed-source until very recently. I theorized that by recursively calling non-reasoning models with clever prompting, I could simulate that reasoning with existing technology and improve the response quality of existing models.
Using the agent group chat functionality integrated into Semantic Kernel from AutoGen, I created three agents (sketched in the code after this list):
Initial Agent (Initial):
Role: Serves as the first point of contact, interpreting the user's question and setting the stage for deeper analysis.
Instructions: Guides the Thinker agent to begin the reasoning process based on the user's input.
Thinker Agent (Thinker):
Role: Performs iterative, step-by-step reasoning to explore different aspects of the question.
Instructions: Continues thinking until it determines that it has sufficiently explored the topic.
Answerer Agent (Answerer):
Role: Provides the final, comprehensive answer by synthesizing the Thinker's reasoning.
Instructions: Delivers an accurate and complete response to the user's query.
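To make the architecture concrete, here is a minimal sketch of how the three agents' roles can be expressed as system instructions. The instruction wording and the AGENTS structure below are illustrative assumptions, not the exact prompts used in the app.

```python
# Illustrative agent definitions: the names match the architecture above, but
# the instruction text is an assumption, not the app's exact prompts.
AGENTS = {
    "Initial": (
        "Interpret the user's question, restate it clearly, and hand off to the "
        "Thinker agent with a short list of points it should consider."
    ),
    "Thinker": (
        "Reason step by step over the question, considering both its text and its "
        "spirit from every relevant facet. Keep thinking across turns until the "
        "topic is sufficiently explored, then reply with exactly 'I am done thinking.'"
    ),
    "Answerer": (
        "Synthesize the Thinker's reasoning into a single accurate, complete, and "
        "focused answer for the user."
    ),
}
```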
This architecture allows the agentic workflow to parse the incoming query, reason over it until it judges the analysis sufficient, and then use that analysis to produce a strong response. First, the Initial agent interprets the user's question, rephrasing it for clarity and giving the Thinker agent points to consider. The Thinker agent's prompt directs it to consider both the text and the spirit of the user's inquiry and to examine it from all facets. It reasons through the problem iteratively and, once it is done thinking, directs the Answerer agent to generate the final response.
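Below is a minimal sketch of that orchestration loop, reusing the AGENTS instructions from the earlier sketch. It assumes a hypothetical complete(system_prompt, history) helper that wraps a single non-reasoning chat-completion call; in the actual app this role is played by Semantic Kernel's agent group chat, so the turn cap and the "I am done thinking." sentinel below are illustrative assumptions rather than the app's exact mechanics.

```python
def answer_with_simulated_reasoning(question: str, max_thinking_turns: int = 8) -> str:
    """Recursively call non-reasoning agents until the Thinker signals it is done."""
    history = [f"User: {question}"]

    # 1. The Initial agent frames the question and seeds points for the Thinker.
    framing = complete(AGENTS["Initial"], history)   # `complete` is a hypothetical LLM call
    history.append(f"Initial: {framing}")

    # 2. The Thinker agent iterates until it declares the analysis sufficient.
    for _ in range(max_thinking_turns):              # turn cap is an assumption
        thought = complete(AGENTS["Thinker"], history)
        history.append(f"Thinker: {thought}")
        if "I am done thinking." in thought:          # sentinel phrase is an assumption
            break

    # 3. The Answerer agent synthesizes the accumulated reasoning into the reply.
    return complete(AGENTS["Answerer"], history)
```

The key design choice is that the Thinker controls its own stopping point: each turn appends to a shared history, and the Answerer only sees the question once the Thinker has decided the analysis is complete.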
In my testing, this technique produced answers that were more thorough, better thought out, and more focused. Responses adhered more closely to the prompt, with less extraneous information. Generated code also tended to be higher quality, following better practices and including improved optimization and error handling.
An example of an interaction with this app:
User: Can you explain how photosynthesis works in plants?

Initial: Please begin thinking about how to explain the process of photosynthesis in plants to the user.

Thinker: First, consider the role of chlorophyll in capturing sunlight.

Thinker: Next, think about how plants convert carbon dioxide and water into glucose and oxygen.

Thinker: Also, reflect on the importance of photosynthesis in the global ecosystem.

Thinker: I am done thinking.

Answerer: Photosynthesis in plants is a process where chlorophyll in the leaves captures sunlight to convert carbon dioxide from the air and water from the soil into glucose, a type of sugar that provides energy for the plant. Oxygen is released as a byproduct. This process is fundamental to life on Earth as it produces the oxygen we breathe and forms the base of the food chain.