Available Interaction Nodes:
  • LLM Node - Directly interact with Large Language Models
  • Agent Node - Delegate tasks to pre-configured AI agents

LLM Node

The LLM Node provides direct access to Large Language Models (GPT-4, Claude, etc.) for text generation, analysis, extraction, and decision-making. It’s the most versatile interaction node, supporting chat history, tool calling, and structured output extraction.
If you have existing pipelines that use the old LLM node format, see the v2.0.0 Migration Guide for updating to the new System/Task structure.

Purpose

Use the LLM Node to:
  • Generate text based on prompts and context
  • Analyze content and extract insights
  • Extract structured data from unstructured text
  • Have conversations with full chat history support
  • Call tools via function calling
  • Make intelligent decisions based on context

Parameters

System
  Purpose: Provide system-level instructions that set the LLM’s behavior, role, or constraints.
  Type options:
    • Fixed - Static system message. Example: “You are a helpful assistant.”
    • F-String - System message with embedded variables. Example: “You are a {domain} expert.”
    • Variable - System message from a state variable. Example: custom_system_prompt

Task
  Purpose: Define the specific task or user request the LLM should process.
  Type options:
    • Fixed - Static task. Example: “Summarize the text.”
    • F-String - Task with embedded variables. Example: “Analyze {user_story} and extract requirements.”
    • Variable - Task from a state variable. Example: task_instruction

Chat History
  Purpose: Provide conversation context from previous interactions.
  Type options:
    • Fixed - No history. Example: []
    • F-String - Formatted conversation history. Example: “Previous conversation: {messages}”
    • Variable - Use conversation history. Example: messages

Input
  Purpose: Specify which state variables the LLM node reads from.
  Options: Default states (input, messages) or any custom state variables.
  Example:
    - input
    - messages
    - user_context

Output
  Purpose: Define which state variables the LLM’s response should populate.
  Options: Default (messages) or custom state variables.
  Example:
    - extracted_title
    - extracted_description
    - messages

Toolkits
  Purpose: Bind external tools and MCPs to the LLM for function calling.
  Options: Toolkits (service integrations) and MCPs (Model Context Protocol servers).
  Example:
    jira_toolkit:
      - create_issue
      - update_issue
    slack_toolkit:
      - send_message

Interrupt Before
  Purpose: Pause pipeline execution before this node.
  Options: Enabled / Disabled

Interrupt After
  Purpose: Pause pipeline execution after this node for inspection.
  Options: Enabled / Disabled

Structured Output
  Purpose: Force the LLM to return data in a structured format matching the output variables.
  Options: Enabled (response parsed into state variables) / Disabled (free-form text appended to messages).
  Example: true or false
YAML Configuration
nodes:
  - id: Analyze_feedback
    type: llm
    prompt:
      type: string
      value: ''
    input:
      - input
      - user_context
    output:
      - extracted_title
      - messages
    structured_output: false
    transition: END
    input_mapping:
      system:
        type: fixed
        value: You are a helpful assistant
      task:
        type: fstring
        value: Analyze {user_story} and extract requirements.
      chat_history:
        type: variable
        value: messages
    tool_names:
      JiraAssistant:
        - create_issue
        - update_issue
interrupt_before:
  - Analyze_feedback
state:
  messages:
    type: list
  input:
    type: str
  extracted_title:
    type: str
    value: ''
  user_context:
    type: str
    value: ''
When using structured output with interrupts, include messages in the output variables for meaningful interrupt output.
To bind toolkits and tools:
  1. Select a Toolkit/MCP from the dropdown
  2. A tool dropdown appears for that toolkit
  3. Select the specific tools to make available to the LLM
  4. Repeat for multiple toolkits (each gets its own tool dropdown)
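These selections end up in the node’s YAML under tool_names; a sketch, reusing the JiraAssistant toolkit from the configuration above (the Slack toolkit name is illustrative):

```yaml
# Tools selected in the UI appear under tool_names,
# grouped by toolkit; only the listed tools are callable by the LLM.
tool_names:
  JiraAssistant:
    - create_issue
    - update_issue
  SlackToolkit:
    - send_message
```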

Best Practices

1. Always Include messages in Output for Interrupts

When using structured output with interrupts:

Correct:
output: ["extracted_data", "status", "messages"]
structured_output: true
Avoid:
output: ["extracted_data", "status"]  # Missing messages
structured_output: true

2. Use Appropriate Prompt Types

  • Fixed: For static, unchanging instructions
  • F-String: When you need to inject specific state variables
  • Variable: When the entire prompt comes from state
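The three types map directly to input_mapping entries; a sketch, mirroring the configuration shown earlier ({user_story} is an example state variable):

```yaml
input_mapping:
  system:
    type: fixed          # static, unchanging instructions
    value: You are a helpful assistant
  task:
    type: fstring        # injects specific state variables at run time
    value: Analyze {user_story} and extract requirements.
  chat_history:
    type: variable       # the entire value comes from a state variable
    value: messages
```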

3. Limit Tool Binding

Only bind tools the LLM actually needs.

Good: Select specific, relevant tools
toolkits:
  jira_toolkit:
    - create_issue
    - update_issue
Avoid: Binding all tools unnecessarily
toolkits:
  jira_toolkit:
    - [all 50+ tools selected]  # Confuses LLM

4. Structure Your Prompts

Use clear, structured prompts.

Good:
prompt:
  type: "fstring"
  value: |
    ## Task
    Analyze the user story: {user_story}
    
    ## Requirements
    Extract:
    1. Title
    2. Description
    3. Acceptance Criteria
    
    ## Output Format
    Provide structured data for each field.
Avoid: Vague prompts
prompt:
  type: "fixed"
  value: "Do something with the data"

5. Specify Output Variables Clearly

Match output variables to what you’re extracting.

Good:
output: ["jira_project_id", "epic_id", "user_story_title", "messages"]
structured_output: true

6. Use Chat History Wisely

  • Include messages in input when context matters
  • Use [] (empty array) for stateless single-turn requests
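The two cases differ only in the chat_history mapping:

```yaml
# Multi-turn: pass the accumulated conversation from state
chat_history:
  type: variable
  value: messages

# Single-turn: no prior context
chat_history:
  type: fixed
  value: []
```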

7. Test with Interrupts

Use interrupts during development to verify LLM outputs:
interrupt_after:
  - Analyze_feedback  # Pause after this node to inspect results

8. Handle Tool Calling Errors

When using toolkits, account for potential tool failures in your prompt:
prompt:
  type: "fixed"
  value: |
    Use available tools to complete the task.
    If a tool fails, explain what went wrong and suggest alternatives.

Agent Node

The Agent Node allows you to delegate tasks to pre-configured AI agents that have been added to your pipeline. Instead of configuring LLM behavior from scratch, you leverage existing agents with specialized capabilities, prompts, and toolkits.

Purpose

Use the Agent Node to:
  • Delegate complex tasks to specialized agents
  • Reuse existing agents across multiple pipelines
  • Maintain consistency with pre-configured agent behavior
  • Simplify workflows by avoiding duplicate LLM configuration
  • Leverage agent-specific toolkits and integrations

Parameters

Agent
  Purpose: Select which pre-configured agent to execute.
  Options: Only agents added to the pipeline appear in the dropdown.
  How to add:
    1. Go to Pipeline Configuration > Toolkits
    2. Select agents
    3. Added agents become available
  Example: jira_assistant_agent

Input
  Purpose: Specify which state variables the agent reads from.
  Options: Default states (input, messages) or any custom state variables.
  Example:
    - project_id
    - input

Output
  Purpose: Define which state variables the agent’s response should populate.
  Options: Default (messages) or custom state variables.
  Example:
    - jira_ticket_id
    - ticket_url
    - messages

Task (Input Mapping)
  Purpose: Map the specific task instruction for the agent.
  Type options:
    • Fixed - Static task. Example: “Create a Jira ticket for this issue.”
    • F-String - Task with variables. Example: “Create Jira ticket in {project_id}”
    • Variable - Task from state. Example: task_instruction

Chat History (Input Mapping)
  Purpose: Map conversation context to provide to the agent.
  Type options:
    • Fixed - No history. Example: []
    • F-String - Formatted history. Example: “Previous context: {messages}”
    • Variable - Use conversation history. Example: messages

Custom Variables (Input Mapping)
  Purpose: Map agent-specific custom variables, if defined.
  Type options:
    • Fixed - Static value
    • F-String - Value with variables
    • Variable - Value from state
  Example (if the agent has a jira_project variable):
    jira_project: PROJ-123

Interrupt Before
  Purpose: Pause pipeline execution before this node.
  Options: Enabled / Disabled

Interrupt After
  Purpose: Pause pipeline execution after this node for inspection.
  Options: Enabled / Disabled
YAML Configuration
nodes:
  - id: Agent 1
    type: agent
    input:
      - input
      - project_id
    output:
      - jira_ticket_id
      - ticket_url
      - messages
    transition: END
    input_mapping:
      task:
        type: fstring
        value: Create Jira ticket in {project_id}
      chat_history:
        type: fixed
        value: []
      project_id:
        type: variable
        value: project_id
    tool: Jiraepam
interrupt_before:
  - Agent 1
state:
  messages:
    type: list
  input:
    type: str
  project_id:
    type: str
    value: ''
  jira_ticket_id:
    type: str
    value: ''
  ticket_url:
    type: str
    value: ''
The Input Mapping section appears after you select an agent. Every agent includes TASK and CHAT_HISTORY mappings. If the agent has custom variables, they also appear as mapping options.
Before using Agent Node, ensure agents are added in Pipeline Configuration > Toolkits section. Only added agents appear in the dropdown.

Best Practices

1. Add Agents to Pipeline First

Ensure the agent is added in Pipeline Configuration > Toolkits section before using Agent Node.

2. Map Task Clearly

Provide clear, specific task instructions.

Good:
task:
  type: "fstring"
  value: "Create Jira ticket in project {project_id} with title '{title}', description '{description}', and priority {priority}."
Avoid:
task:
  type: "variable"
  value: "input"  # Too vague

3. Use Chat History Appropriately

  • With History: Use when agent needs conversation context
  • Without History: Use [] for independent, stateless tasks

4. Map Custom Variables Correctly

If the agent has custom variables, map them to pipeline state.

Good:
input_mapping:
  jira_project:
    type: "variable"
    value: "project_id"
  sprint_number:
    type: "variable"
    value: "current_sprint"

5. Include messages in Output

For debugging and continuity, include messages:
output: ["agent_result", "messages"]

6. Use Interrupts for Testing

Test agent behavior with interrupts:
interrupt_after:
  - Agent 1  # Review the agent output before continuing

7. Reuse Agents Across Pipelines

Create specialized agents once, reuse in multiple pipelines for consistency.

8. Handle Agent Failures

Consider error handling in subsequent nodes:
- id: "check_agent_result"
  type: "router"
  condition: "agent_result is not None"
  routes: ["success_path", "error_path"]

Interaction Nodes Comparison

Feature | LLM Node | Agent Node
Purpose | Direct LLM interaction with full control | Delegate to pre-configured specialized agents
Configuration | Configure prompt, system, task from scratch | Use existing agent configuration
Prompt | Defined in node (Fixed/F-String/Variable) | Inherited from agent, customized via Task mapping
Toolkits | Select Toolkits & MCPs, choose specific tools | Agent’s toolkits are pre-configured
Input | State variables (input, messages, custom) | State variables (input, messages, custom)
Output | State variables (messages, custom) | State variables (messages, custom)
Input Mapping | Not applicable (direct state access) | Map pipeline state to agent parameters (Task, Chat History, custom variables)
Structured Output | Supported (enabled via toggle) | Depends on agent configuration
Conversation History | Controlled via Chat History parameter | Controlled via CHAT_HISTORY mapping
Reusability | Node-specific configuration | Agent can be reused across pipelines
Flexibility | Highly flexible, configure everything | Limited to agent’s design
Complexity | More setup required | Simpler, leverages existing agent
Use Case | Custom tasks, one-off requests, full control | Specialized tasks, consistent behavior, reusable workflows

When to Use LLM Node

Choose LLM Node when you need:
  • Full control over prompts and behavior
  • Custom tool binding for specific workflow
  • One-off or unique LLM interactions
  • Structured output extraction
  • Simple text generation without pre-configuration

When to Use Agent Node

Choose Agent Node when you:
  • Have an existing agent that does exactly what you need
  • Want to reuse agent logic across multiple pipelines
  • Need consistent behavior from pre-configured agents
  • Want to simplify pipeline by delegating to specialists
  • Have agents with specific domain knowledge or toolkits
You can use both node types in the same pipeline:
  • LLM Node for custom, ad-hoc processing
  • Agent Node for specialized, reusable tasks
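A sketch of such a mixed pipeline, reusing the Jiraepam agent toolkit from the example above (node ids, state variables, and prompt text are illustrative):

```yaml
nodes:
  - id: Extract_requirements      # LLM Node: custom, ad-hoc processing
    type: llm
    input:
      - input
    output:
      - extracted_title
      - messages                  # keep messages for meaningful interrupts
    structured_output: true
    input_mapping:
      system:
        type: fixed
        value: You are a requirements analyst.
      task:
        type: fstring
        value: Extract a concise ticket title from {input}.
      chat_history:
        type: fixed
        value: []
    transition: Create_ticket
  - id: Create_ticket             # Agent Node: specialized, reusable task
    type: agent
    input:
      - extracted_title
    output:
      - jira_ticket_id
      - messages
    input_mapping:
      task:
        type: fstring
        value: Create a Jira ticket titled {extracted_title}.
      chat_history:
        type: fixed
        value: []
    tool: Jiraepam
    transition: END
```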