RunStack is a visual workflow builder for Large Language Model (LLM) orchestration. It enables developers and teams to design, test, and deploy complex AI workflows without writing brittle orchestration code. It solves the problem of slow iteration and fragile production systems by providing a visual interface and a simple API.
Freemium, from $19/month
How to use RunStack?
Users design multi-step LLM workflows visually using a drag-and-drop interface. They can connect different AI models (like GPT-4, Claude, Gemini), define logic, and create parallel variants for A/B testing. Once designed, workflows are executed via a simple REST API with just two endpoints. This allows for rapid prototyping, testing different prompts and models side-by-side, and deploying updates instantly without touching production code.
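The two-endpoint execution model described above can be sketched as follows. This is a minimal illustration, not official client code: the endpoint names `/run` and `/results` come from RunStack's own description, but the base URL, bearer-token auth scheme, and request/response payload shapes are assumptions — check the actual API documentation before use.

```python
import json
import urllib.request

# Hypothetical base URL -- replace with the real one from your RunStack dashboard.
BASE_URL = "https://api.runstack.example"

def build_run_request(workflow_id, inputs, api_key):
    """Build a POST request for the /run endpoint (payload shape assumed)."""
    body = json.dumps({"workflow_id": workflow_id, "inputs": inputs}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/run",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def build_results_request(run_id, api_key):
    """Build a GET request for the /results endpoint (query parameter assumed)."""
    return urllib.request.Request(
        f"{BASE_URL}/results?run_id={run_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
```

In practice you would send the first request with `urllib.request.urlopen` (or any HTTP client), read a run identifier from the response, then poll the second endpoint until the workflow completes.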
RunStack's Core Features
Visual Workflow Builder: A no-code, drag-and-drop interface to design complex LLM pipelines involving multiple models, conditional logic, and data processing steps.
Safe Iteration & Instant Updates: Deploy new workflow versions with two clicks. Update prompts, models, or logic instantly without redeploying your main application code.
Built-in A/B Testing (Variants): Natively test different prompts, models, and parameters side-by-side within the same workflow stage to compare performance and outputs.
Simple Execution API: Run any published workflow through a clean, consistent API with only two primary endpoints (/run and /results), simplifying integration.
Multi-Model Support: Seamlessly integrate and orchestrate calls to various LLM providers like OpenAI (GPT-4), Anthropic (Claude), Google (Gemini), and Mistral within a single workflow.
Concurrent Execution & Scaling: Plans support multiple concurrent workflow runs, API keys, and higher rate limits to handle production-level traffic and team usage.
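Since each plan enforces a per-second rate limit (1-30 req/sec depending on tier), a client calling the execution API may want to throttle itself rather than rely on server-side 429s. The sketch below is a generic client-side pacer, not a RunStack feature — the class name and approach are illustrative.

```python
import time

class RateLimiter:
    """Space out calls to stay under a plan's req/sec limit.

    Example: RateLimiter(3) for the Starter plan's 3 req/sec cap.
    """

    def __init__(self, max_per_sec):
        self.interval = 1.0 / max_per_sec
        self.next_allowed = time.monotonic()

    def acquire(self):
        """Sleep if needed so calls are at least `interval` seconds apart.

        Returns the number of seconds actually slept.
        """
        now = time.monotonic()
        delay = max(0.0, self.next_allowed - now)
        if delay > 0:
            time.sleep(delay)
        self.next_allowed = max(now, self.next_allowed) + self.interval
        return delay
```

Call `limiter.acquire()` immediately before each API request; bursts beyond the configured rate are automatically delayed.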
RunStack's Use Cases
For sales teams automating lead enrichment: Create a workflow that takes a list of prospects, uses different LLMs to research company details and generate personalized outreach talking points, then outputs a refined list.
For content marketers generating and optimizing copy: Design a workflow that brainstorms topics with one model, writes drafts with another, and uses a third to critique and suggest improvements, all in one automated sequence.
For customer support teams building intelligent triage systems: Build a workflow that classifies incoming support tickets using an LLM, routes them to appropriate knowledge bases or human agents, and drafts initial responses.
For developers prototyping AI features: Rapidly test different model combinations and prompt engineering strategies for a new application feature without writing and rewriting complex orchestration logic.
For data analysts processing unstructured text: Create a workflow to extract entities, summarize findings, and perform sentiment analysis on batches of documents using specialized LLMs for each task.
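For batch-oriented use cases like the last two above, the plan's concurrent-run limit (1 on Free, 2 on Starter, up to 10 on Scale) caps how many workflow executions you can have in flight at once. A minimal client-side batch driver, assuming `run_workflow` is whatever callable triggers one execution and returns its result:

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(items, run_workflow, max_concurrent=2):
    """Run a workflow over a batch of inputs, capped at the plan's
    concurrent-run limit (e.g. 2 for the Starter plan).

    `run_workflow` is any callable taking one item and returning the
    workflow's result; results come back in input order.
    """
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        return list(pool.map(run_workflow, items))
```

Set `max_concurrent` to your tier's limit so the client never queues more simultaneous runs than the API will accept.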
RunStack's Pricing
Free
$0/month
Forever free plan with basic features, 50 execution credits/month, 1 concurrent run, 1 API key, and 1 req/sec rate limit.
Starter
$19/month
Popular plan for individuals and small projects. Includes 1000 execution credits/month, 2 concurrent runs, 3 API keys, and 3 req/sec rate limit.
Pro
$49/month
For professionals with custom models. Includes 3000 execution credits/month, 5 concurrent runs, 10 API keys, and 10 req/sec rate limit.
Scale
$99/month
For teams with high volume needs. Includes 8000 execution credits/month, 10 concurrent runs, 25 API keys, and 30 req/sec rate limit.