ChatGPT, Claude, Perplexity, and Gemini integrations for chat, real-time information retrieval, and text processing tasks, such as paraphrasing, simplifying, or summarizing.
With support for third-party proxies and local LLMs.
Converse with your Primary via the `ask` keyword, Universal Action, or Fallback Search.
Sticky Preview example:
Type `!` to see all currently available filters and hit ↩ to apply one.
Conversations can be marked as favorites or pinned. Pinned conversations always stay on top, while favorites can be filtered and searched further via the `!fav` bang filter.
You can use Ayai to chat with your documents or attach images to your conversations.
By default, when starting a new conversation or attaching a file to an ongoing chat, a summary is created. You can also enter an optional prompt, which is taken into account when the summary is generated.
Currently supported are PDF, docx, and all plain text and source code files.
To extract text from docx files, Ayai uses pandoc if it is installed; otherwise, a crude workaround is used.
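If you use Homebrew, pandoc can be installed with a single command (any other installation method works just as well):

```sh
# Install pandoc so Ayai can use it for docx text extraction
brew install pandoc
```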
Inference Actions provide a suite of language tools for text generation and transformation. These tools enable summarization, clarification, concise writing, and tone adjustment for selected text. They can also correct spelling, expand and paraphrase text, follow instructions, answer questions, and improve text in other ways.
Access a list of all available actions via the Universal Action or by setting the Hotkey trigger.
Tip: Write a detailed prompt directly into the Inference Action filter to customize your instruction or question.
The inference actions are generated from a JSON file called `actions.json`, located in the workflow folder. You can customize existing actions or add new ones by editing this file directly, or by editing `actions.config.pkl` and then evaluating it with pkl.
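As a rough sketch, an entry in `actions.json` might look like the following. The exact schema is defined by the workflow, so the keys shown here (`name`, `description`, `prompt`) are assumptions for illustration; consult the bundled `actions.json` for the authoritative structure:

```json
{
  "name": "Explain Simply",
  "description": "Rewrites the selected text as a plain-language explanation.",
  "prompt": "Explain the following text in simple terms that a layperson can understand:"
}
```

If you prefer maintaining `actions.config.pkl`, the JSON can be regenerated with the pkl CLI, for example `pkl eval -f json -o actions.json actions.config.pkl` (flags may differ between pkl versions).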
Important: Always back up your customized Inference Actions before updating the workflow, or your changes will be lost.
A prompt is the text you give the model to elicit, or "prompt," a relevant output. It usually takes the form of a question or an instruction, for example: "Summarize the following text in three sentences."
The Primary configuration setting determines which service is used for conversations.
If you want to use a third-party proxy, define the corresponding `host`, `path`, API key, `model`, and, if required, the `url scheme` or `port` in the environment variables.
The variables are prefixed as alternatives to OpenAI because Ayai expects the returned stream events and errors to mirror the shape of those returned by the OpenAI API.
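As an illustration, a setup for OpenRouter could look like the following. The variable names below are placeholders, not the workflow's actual identifiers; map each value to the corresponding field exposed in the workflow configuration:

```sh
# Hypothetical variable names -- use the fields the workflow actually exposes.
# OpenRouter's OpenAI-compatible endpoint is https://openrouter.ai/api/v1/chat/completions
url_scheme=https
host=openrouter.ai
path=/api/v1/chat/completions
api_key=sk-or-...                  # your OpenRouter API key
model=anthropic/claude-3.5-sonnet  # any model ID offered by the proxy
```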
If you want to use a local language model, define the corresponding `url scheme`, `host`, `port`, `path`, and, if required, the `model` in the environment variables to establish a connection to the local HTTP server initiated and maintained by the method of your choice.
The variables are prefixed as alternatives to OpenAI because Ayai expects the returned stream events and errors to mirror the shape of those returned by the OpenAI API.
Note: Additional stop sequences can be provided via the shared `finish_reasons` environment variable.
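For example, with Ollama serving its OpenAI-compatible API on the default port, the values would look roughly like this. Again, the variable names are placeholders for whatever the workflow exposes:

```sh
# Ollama listens on http://localhost:11434 and mirrors the OpenAI chat API under /v1.
url_scheme=http
host=localhost
port=11434
path=/v1/chat/completions
model=llama3.1            # required by Ollama's API, which can serve multiple models
# finish_reasons=...      # optional extra stop sequences; value format per the workflow docs
```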
Ayai will make sure that the frontmost application accepts text input before streaming or pasting, and will simply copy the result to the clipboard if it does not. This requires accessibility access, which you may need to grant in order to use inference actions. Note: You can override this behaviour with the Safety Exception configuration option. ↩
Third-party proxies such as OpenRouter, Groq, Fireworks, or Together.ai (see wiki) ↩
Local HTTP servers can be set up with interfaces such as LM Studio or Ollama ↩