GitHub: link
Modern LLMs, including those with 400B+ parameters, lack the specialized knowledge required to solve complex issues and answer deeply technical questions.
A comprehensive context-providing system should allow LLMs of all sizes to gain knowledge well outside the scope of the base language model.
A self-prompting approach allows any available downtime to be used for knowledge expansion, in perpetuity.
Deploy every single worker, database and utility simultaneously:
sudo docker-compose -f docker/docker-compose.yml up
Please note that the WebUI frontend has to be launched separately; see below.
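If you prefer to keep the whole stack running in the background, the same compose file can be started detached and watched through its logs. These are standard docker-compose flags, nothing project-specific:

# start all services in the background
sudo docker-compose -f docker/docker-compose.yml up -d
# follow the combined logs of every service
sudo docker-compose -f docker/docker-compose.yml logs -f
# stop and remove everything again
sudo docker-compose -f docker/docker-compose.yml down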
User Interaction
The frontend is launched separately from the backend; run the following commands to start it:
cd webui/frontend/
npm install
npm run dev
Open http://localhost:3000/ in your browser.

pgAdmin
To inspect the database, use pgAdmin, which is already included in the docker compose, at localhost:8081/browser/. Choose Add New Server; under Name, write postgres. In the Connection tab, under Hostname/address, write postgres; in Username, write admin; and in Password, write pass. Click Save, and the database should be immediately available.
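If you would rather skip pgAdmin, the same credentials should also work from the command line with psql. The host port is an assumption here; 5432 is the Postgres default, so check docker/docker-compose.yml if the connection is refused:

# connect from the host machine (enter "pass" when prompted)
psql -h localhost -p 5432 -U admin -d postgres
# or open a psql shell inside the container; "postgres" is the
# compose service name implied by the hostname used above
sudo docker-compose -f docker/docker-compose.yml exec postgres psql -U admin -d postgres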
Model configurations can be found in the core/models/configurations folder. The project targets Python 3.9; environment.yml is the Linux environment, but other environment files are available for macOS (Apple silicon) and Windows.
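To reproduce the environment outside Docker, the usual conda workflow applies. The environment name is whatever the name: field of environment.yml declares; research-chain below is only a placeholder:

# create the env from the file matching your OS
conda env create -f environment.yml
# activate it under its declared name (placeholder shown)
conda activate research-chain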
A set of workers watches for new tasks 24/7. This is the tool part of the project: on its own it cannot schedule new tasks, except as a side effect of a previous task. It consists of 3 infinitely scalable workers which, together with a 4th auto-scheduling worker app, form a complete loop. This design allows for near-infinite scalability.
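Since the workers are designed to scale horizontally, extra replicas can be started with docker-compose's standard --scale flag. The service name below is a placeholder; use the names actually defined in docker/docker-compose.yml:

# run three replicas of one (hypothetical) worker service
sudo docker-compose -f docker/docker-compose.yml up -d --scale worker=3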
Built-in examples of how to interact with Research Chain. The one we currently offer is the WebUI, with AutoScheduler and NewsScheduler planned for the near future. These services automatically dispatch, analyze and manage Research Chain tasks. AutoScheduler and NewsScheduler are meant to work alongside the Web Interface, supplying constant 24/7 knowledge and news analysis and expanding the knowledge base by scheduling crawls based on the provided areas of interest.