In Development: 11.1

Cerb (11.1) is a feature upgrade in development as of April 02, 2025. It includes more than 34 new features and improvements from community feedback.
To check if you qualify for this release as a free update, view Setup » Configure » License. If your software updates expire on or after March 31, 2025, then you can upgrade without renewing your license. Cerb Cloud subscribers will be upgraded automatically.
Important Release Notes
- Cerb 11.1 requires PHP 8.2+ and MySQL 8.0+ (or MariaDB 10.5+).
- To upgrade your installation, follow these instructions.
- NOTE: The branches in the `cerb/cerb-release` repository have changed. The `master` branch has been removed, and new branches exist for each major version (e.g. `v11.0`, `v11.1`).
Added
- [Automations/LLM] In automations, added an `llm.agent:` command to simplify integrations with generative large language models (LLMs). The agent automatically manages history and tool use. Tools can be created using `llm.tool` automations, with documentation generated automatically from their `inputs:`. The `llm.agent:` command will automatically invoke tools and return their output to the model. Integration is included for Anthropic, OpenAI, Groq, and Ollama.
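  For orientation, here is a minimal sketch of how an `llm.agent:` call might look in automation KATA. Only `llm.agent:`, `tools:`, and the output key convention come from these notes; the `provider:`, `model:`, `prompt:`, and tool reference keys are illustrative assumptions, not the final schema.

  ```
  # Hypothetical sketch of an llm.agent: call (key names other than
  # llm.agent:, tools:, and the output key are assumptions)
  start:
    llm.agent:
      output: results
      provider: anthropic
      model: claude-3-5-sonnet-20241022
      prompt: Summarize the customer's open tickets in two sentences.
      tools:
        # llm.tool automations made available to the model (reference format is an assumption)
        searchTickets: example.tool.searchTickets
    return:
      answer@text: {{results.content}}
      session@text: {{results.session_id}}
  ```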
- [Automations/LLM] In the `llm.agent:` automation command, non-automation `tools:` can be defined as type `tool:` with a description and parameters. When a model uses one of these tools, the `on_tool:` event is triggered to run custom logic, with a `__tool` placeholder object containing the keys `id`, `name`, and `parameters`. A result must be returned with the `tool.return:` command. For example, this can use `await:interaction:` to delegate to any interaction, or to implement "human in the loop" manual approval workflows.
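  A rough illustration of a custom (non-automation) tool with an `on_tool:` handler follows. The `tools:`, `tool:` type, `on_tool:`, `__tool` placeholders, `tool.return:`, and `await:interaction:` pieces come from these notes; the exact nesting, parameter schema, and return keys are assumptions.

  ```
  # Hypothetical custom tool with a "human in the loop" approval step
  llm.agent:
    output: results
    prompt: File a bug report for the problem the user describes.
    tools:
      tool/createBugReport:
        description: Create a bug report from a title and a summary.
        parameters:
          text/title:
            description: A short title for the bug report.
          text/summary:
            description: A longer description of the problem.
    on_tool:
      # __tool.id, __tool.name, and __tool.parameters describe the requested call
      await:
        interaction:
          # delegate to an approval interaction (URI and input keys are illustrative)
          uri: cerb:automation:example.interaction.approveBugReport
          inputs:
            title: {{__tool.parameters.title}}
            summary: {{__tool.parameters.summary}}
      tool.return:
        content: The bug report '{{__tool.parameters.title}}' was created after approval.
  ```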
- [Automations/LLM] Added an `llm.tool` automation trigger. A collection of these tools can be provided to an `llm.agent:` using a large language model (LLM) that supports function calling (tool use). The inputs of an `llm.tool` automation are used to automatically generate the JSON Schema expected by common models. For instance, tools can provide access to Cerb data or integrate with third-party APIs.
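  A sketch of what an `llm.tool` automation itself might look like; the `inputs:` block (which becomes the JSON Schema shown to the model) follows the conventions above, while the body and the return key name are illustrative assumptions.

  ```
  # Hypothetical llm.tool automation (the return key name is an assumption)
  inputs:
    text/ticket_mask:
      description: The mask of the ticket to look up.
      required@bool: yes
  start:
    # ...look up the record here...
    return:
      content@text:
        The ticket {{ticket_mask}} is currently open and assigned to the Support group.
  ```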
- [Interactions/Worker] In worker interactions, added a new `llmTranscript` form element. In two lines of code, this automatically handles rendering a chat history from an LLM session ID, with copy-to-clipboard, thumbs up/down ratings, etc. Previously, this had to be tediously implemented using a `sheet` element.
- [Interactions/Website] In website interactions, added a new `llmTranscript` form element. In two lines of code, this automatically handles rendering a chat history from an LLM session ID, with copy-to-clipboard, thumbs up/down ratings, etc. Previously, this had to be tediously implemented using a `sheet` element.
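  A sketch of how the element might be declared (the same shape would apply in worker interactions), assuming the usual `type/key:` element syntax; the session ID option name is an assumption.

  ```
  # Hypothetical form snippet using the llmTranscript element
  await:
    form:
      title: Assistant
      elements:
        llmTranscript/chat:
          session_id: {{results.session_id}}
        submit:
  ```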
- [Automations/LLM] In automations, the `llm.agent:` command now returns a `session_id` to its output key. This can be used with the `llmTranscript` form element to render the chat history.
- [Automations/LLM] In automations, refactored the `llm.agent:` command so that all tool use (including automations) executes the `on_tool:` branch. The same `__tool` placeholders are available as for custom tools. This allows real-time updates to be returned to the user during tool use, which resets the 30-second HTTP request time limit. Long-running deep research is now possible.
- [Devblocks/Platform/LLM] Added an LLM service to the Devblocks platform. This interfaces with Large Language Model APIs for generative text (e.g. Q&A, summarization, classification, report generation, email drafts).
- [Platform/LLM/Ollama] Added an `ollama` provider to the LLM service. This can connect to locally hosted open-source large language models through the Ollama API.
- [Platform/LLM/OpenAI] Added an `openai` provider to the LLM service. This can connect to hosted large language models through the OpenAI API (e.g. `gpt-4o`). A customizable `api_endpoint_url` also allows the use of other compatible services.
- [Platform/LLM/Groq] Added a `groq` provider to the LLM service. This connects to fast, hosted, open-source large language models through the Groq API.
- [Platform/LLM/Anthropic] Added an `anthropic` provider to the LLM service. This connects to hosted large language models through the Anthropic API (e.g. `claude-3-5-sonnet-20241022`).
- [Platform/LLM/HuggingFace] Added a `huggingface` provider to the LLM service. This uses the Serverless Inference API by default, but Inference Endpoints are also supported.
- [Platform/LLM/TogetherAI] Added a `together` provider to the LLM service. This uses the together.ai inference cloud service.
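  A rough sketch of how one of the providers above might be selected by an `llm.agent:` call. Only the provider keys (`ollama`, `openai`, `groq`, `anthropic`, `huggingface`, `together`) and `api_endpoint_url` come from these notes; the surrounding option names and the endpoint URL are illustrative assumptions.

  ```
  # Hypothetical provider/model selection
  llm.agent:
    output: results
    provider: openai
    model: gpt-4o
    # optional override for OpenAI-compatible services (URL is illustrative)
    api_endpoint_url: https://llm.example.com/v1
    prompt: Draft a polite reply to the customer's latest message.
  ```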
- [Automations/LLM] Added an `llm.embedding` automation trigger. This interfaces with large language model APIs to produce vector embeddings for blocks of text. For instance, embedding knowledgebase articles and FAQs for semantic search.
- [Automations/LLM] Added an `llm.embed:` command to automations. This runs `llm.embedding` automations to produce vector embeddings for blocks of text. For instance, embedding knowledgebase articles and FAQs for semantic search.
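  A sketch of how `llm.embed:` might be invoked; apart from the command name and the output key convention, the option names and the shape of the output are assumptions.

  ```
  # Hypothetical llm.embed: usage for semantic search indexing
  start:
    llm.embed:
      output: embedding
      # an llm.embedding automation that wraps the provider's embeddings API (URI is illustrative)
      uri: cerb:automation:example.llm.embedding
      content: How do I reset my password?
    return:
      vector@json: {{embedding|json_encode}}
  ```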
- [Automations/Inputs] In automations, each input can now set an optional `description:` key. This is used for documentation, autocompletion, and JSON Schema in LLM tool use.
- [Automations/Inputs] In automations, `text:` inputs can now set an optional `allowed_values:` key to create picklists. This is used for documentation, autocompletion, and JSON Schema in LLM tool use.
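  A short sketch combining the new `description:` and `allowed_values:` keys; whether the allowed values take a `@list` annotation with newline-delimited entries (as shown) or another format is an assumption.

  ```
  # Hypothetical inputs: block using description: and allowed_values:
  # (the @list annotation is an assumption)
  inputs:
    text/priority:
      description: The priority to assign to the new task.
      allowed_values@list:
        low
        normal
        high
  start:
    return:
      priority@text: {{priority}}
  ```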
- [Devblocks/Platform/LLM] Added a 'DatabaseHistory' memory module to the LLM service for managing large language model chat history and context.
- [Automations/LLM] In automations, `llm.agent:` transcripts are now viewable in Setup » Developers » LLM Agent Transcripts. Transcripts can be marked read or deleted. When an agent uses a tool, its parameters and results are shown in the transcript. This makes debugging and compliance much easier. It also provides valuable feedback for improving agents based on past user interactions.
- [Automations/LLM] In LLM Transcripts, a 'Permalink' button provides a direct link for sharing transcripts.
- [Automations/LLM] When viewing LLM agent chat transcripts, metadata is displayed at the top (e.g. provider, user, client IP, automation, and automation node).
- [Automations/LLM] When viewing LLM agent chat transcripts, added a 'Copy' button to copy the plaintext (Markdown) content from a message to the browser clipboard.
- [Interactions/Website] In interaction.website automations, `submit:` form elements can now specify the width of each button with a `size:` option: whole, half, third, or quarter.
- [Interactions/Website] In interaction.website automations, `say:` form elements can now specify several `styles:` options: text-center, text-large, text-left, text-right, or text-small.
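  A combined sketch of the new `styles:` and `size:` options in a website form. The option names and their values come from these notes; how multiple styles are listed and how buttons are nested under `submit:` are assumptions.

  ```
  # Hypothetical website interaction form using styles: and size:
  await:
    form:
      elements:
        say/intro:
          content: How can we help you today?
          styles: text-center text-large
        submit/choices:
          buttons:
            continue:
              label: Continue
              size: half
            cancel:
              label: Cancel
              size: half
  ```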
- [Worklists/Search/Performance] Improved the performance of many complex search queries by running some subqueries independently and merging the results by ID. This is enabled by default and can be disabled with the `APP_OPT_SQL_SUBQUERY_TO_IDS` configuration option.
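  For instance, assuming this option is a boolean define in `framework.config.php` like other configuration constants, the optimization could be turned off with:

  ```
  // framework.config.php (hypothetical; the optimization is enabled by default)
  define('APP_OPT_SQL_SUBQUERY_TO_IDS', false);
  ```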
- [Interactions/Website] In website interactions, the `submit:` form element has a new `is_automatic@bool` option. When `true`, the form is automatically submitted after it is rendered. This is particularly useful before a time-intensive operation like LLM text generation, which will instantly transition to a waiting spinner.
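  A sketch of an auto-submitting step shown before a slow operation; the `is_automatic@bool:` option comes from these notes, and the rest of the form is illustrative.

  ```
  # Hypothetical auto-submitted step displayed while the next (slow) step runs
  await:
    form:
      elements:
        say/status:
          content: Working on it...
        submit:
          is_automatic@bool: yes
  ```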
- [Interactions/Worker] In worker interactions, the `submit:` form element has a new `is_automatic` option. This is now preferred to `await:duration:` since it can render other form elements during the wait (e.g. say, sheet, LLM transcript).
Changed
- [Portals/Interactions] In 'Website Interaction' portals and on external websites, interactions now start in 'full' mode and use a blur overlay on the underlying website. The previous 'popup' default style was difficult to see on some website themes.
- [Worklists/Fieldsets] In worklists, improved the performance of the `fieldset:` filter.
- [Interactions/Website] In website interactions, the `sheet:` form element now applies styling to code blocks in `markdown` columns.
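  For instance, a sheet with a column of this type; the `markdown/` column type key, the `params:` layout, and the `data@json:` key are assumptions based on how other sheet columns are declared.

  ```
  # Hypothetical sheet snippet; code blocks in the markdown column now receive styling
  sheet/results:
    data@json: {{rows|json_encode}}
    schema:
      columns:
        text/title:
          params:
            label: Title
        markdown/notes:
          params:
            label: Notes
  ```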
- [Setup/Plugins] In the setup page, moved 'Plugins' from its own menu into the 'Configure' menu. This menu had a single item, and we can use the space for something else.
- [Platform/Markdown] In Devblocks, changed the Markdown parsing library from Parsedown to League\CommonMark. Parsedown has some rendering issues and appears to have been abandoned since 2019.
- [Interactions/Worker] In worker interactions, continuing to the next step no longer hides the current step; only the submit button is hidden, and the spinner is displayed in its place. This allows the auto-submit functionality to show progress messages.