In the paper "AgentRec: Agent Recommendation Using Sentence Embeddings Aligned to Human Feedback," we introduce a novel architecture designed to recommend the most suitable large language model (LLM) agent for a given task based on a natural language prompt. This system extends the Sentence-BERT (SBERT) encoder model to classify prompts efficiently and accurately.
In doing so, we are the first to find a knowledge representation that rigorously encodes the capability of a given LLM agent. Such a representation is a prerequisite for safer LLM applications: it provides a way to ensure that LLMs do not confidently answer prompts they are provably unable to answer.
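As a minimal sketch of the recommendation step (not the paper's implementation), the idea can be illustrated with nearest-centroid matching over embeddings: each agent's capability is summarized by a centroid vector, and a prompt is routed to the agent whose centroid is most similar. The synthetic 768-dimensional vectors and agent names below are placeholders for real SBERT embeddings.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_agent(prompt_emb: np.ndarray, agent_centroids: dict) -> str:
    """Return the agent whose capability centroid is most similar to the prompt."""
    return max(agent_centroids,
               key=lambda name: cosine_similarity(prompt_emb, agent_centroids[name]))

# Synthetic 768-d vectors stand in for real SBERT outputs.
rng = np.random.default_rng(0)
agent_centroids = {
    "code-agent": rng.normal(size=768),
    "math-agent": rng.normal(size=768),
}
# A prompt embedding lying close to one agent's centroid.
prompt_emb = agent_centroids["code-agent"] + 0.1 * rng.normal(size=768)
print(recommend_agent(prompt_emb, agent_centroids))  # → code-agent
```

In the actual system, `prompt_emb` would come from an SBERT encoder fine-tuned with human feedback rather than from random draws.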
Furthermore, we provide a framework for visualizing the capabilities of LLM agents: each encoding is a point in a 768-dimensional space that can be projected into three dimensions with methods such as t-SNE or PCA.
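For the PCA route, the projection can be sketched with a plain SVD-based implementation; the random matrix below is a stand-in for a real batch of 768-dimensional capability embeddings.

```python
import numpy as np

def pca_project(X: np.ndarray, k: int = 3) -> np.ndarray:
    """Project rows of X onto their top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                              # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)    # right singular vectors
    return Xc @ Vt[:k].T                                 # coordinates along the k leading directions

# Synthetic stand-ins for 768-d agent capability embeddings.
rng = np.random.default_rng(1)
embeddings = rng.normal(size=(10, 768))
coords3d = pca_project(embeddings, k=3)
print(coords3d.shape)  # (10, 3)
```

The resulting 3-D coordinates can then be scatter-plotted to compare where different agents' capabilities sit relative to one another; t-SNE would follow the same pattern with a nonlinear embedding in place of `pca_project`.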
For a more rigorous write-up, see https://arxiv.org/abs/2501.13333
Key Contributions:
This work addresses the challenge of selecting appropriate agents in multi-agent systems by leveraging sentence embeddings aligned with human feedback, offering a scalable and interpretable solution for agent recommendation tasks.