
# How to Connect Two Workflows Together

At some point, every automation that works starts to break — not because it stops running, but because it becomes impossible to change.

You needed to add a step. So you added it. Then another. Then a condition. Then a retry. Three months later, the workflow that started as "send a welcome email on signup" now handles email, CRM updates, Slack alerts, plan assignment, and invoice creation. It is 47 steps long. Nobody touches it without a backup plan. One broken node takes down everything.

This is not a workflow problem. This is an architecture problem.

The fix is not a better single workflow. It is to stop thinking in workflows and start thinking in systems — where small, focused flows talk to each other, each one doing one thing, each one replaceable without touching the rest.


# What It Is

One automation triggers, calls, or hands data off to another automation. Instead of one monolithic flow handling everything, you build a network of small flows connected through defined interfaces.

Same thinking as microservices in backend engineering. Each service owns one responsibility. They communicate through APIs. You update one without redeploying the others. Flow-to-flow is that pattern applied to automation.


# Where It Makes Sense

Onboarding pipelines. Signup creates the user record. That triggers a flow that sends the verification email. Confirmation triggers plan assignment. Plan assignment triggers a sales team notification. Four flows, four clear owners, each testable without running the others.

Payment processing. Successful charge → update subscription → trigger fulfillment → send receipt. If the receipt flow breaks, fulfillment is unaffected. Failures are isolated, not cascading.

Reusable utility flows. One "send notification" flow that accepts recipient, channel, and message as input — called from twelve other flows instead of rebuilding notification logic in each one. Change the logic once, everywhere benefits.

Multi-team automation. Marketing owns lead scoring. Sales owns CRM updates. They connect through a shared payload. Neither team needs access to the other's internals.


# The Three Connection Patterns

Async trigger — fire and forget. Flow A sends an HTTP POST to Flow B's webhook and moves on. Flow B runs independently. Flow A has no visibility into whether Flow B succeeds. Use this when the downstream result does not affect what Flow A is doing.
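The fire-and-forget handoff can be sketched in a few lines. This is an illustrative sketch, not any platform's API: the webhook URL and payload fields are assumptions, and the only real requirement is that Flow A never blocks on or inspects Flow B's result.

```python
import json
import urllib.request

# Hypothetical webhook URL for the receiving flow (Flow B).
FLOW_B_WEBHOOK = "https://example.com/hooks/flow-b"

def build_handoff(user_id, plan, correlation_id):
    """Build the minimal payload Flow A hands to Flow B."""
    return {
        "user_id": user_id,
        "plan": plan,
        "correlation_id": correlation_id,
        "schema_version": "1.0",
    }

def fire_and_forget(payload, timeout=2):
    """POST to Flow B's webhook and move on; failures are logged, never raised."""
    req = urllib.request.Request(
        FLOW_B_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        urllib.request.urlopen(req, timeout=timeout)  # response body is ignored
    except OSError as err:
        # Flow A continues regardless; the error is only recorded.
        print(f"handoff to Flow B failed (Flow A continues): {err}")
```

The short timeout matters: a fire-and-forget call that hangs for thirty seconds is not fire-and-forget in practice.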

Sync sub-flow call. Flow A calls Flow B and waits for a response before continuing. Full data continuity, but every sync call in a chain adds latency. A chain of five sync calls at 800ms each means four seconds before anything completes. Default to async. Use sync only when Flow A genuinely cannot continue without Flow B's response.
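The latency math above is worth making concrete. A sketch of a blocking sub-flow call, plus the arithmetic for a sequential chain (the URL and function names are illustrative):

```python
import json
import urllib.request

def call_sync(url, payload, timeout=5):
    """Call a sub-flow and block until it responds.

    Raising on failure is deliberate: a sync call only makes sense when the
    caller cannot continue without the result.
    """
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())

def chain_latency_ms(per_call_ms, calls):
    """Worst-case blocking time when sync calls run strictly in sequence."""
    return per_call_ms * calls

# Five sequential sync calls at 800 ms each block for 4000 ms end to end.
```

Every sync call you add is latency the user-facing end of the chain pays for.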

Event-based handoff. Flow A publishes an event. Multiple flows subscribed to that event pick it up independently. The most decoupled pattern — and the hardest to debug. Use it when multiple independent flows need to react to the same trigger without knowing about each other.
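A toy in-process sketch makes the decoupling visible. Real platforms implement this with webhooks or message queues, not a Python class; the event name and handlers here are made up:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe sketch of the event-based handoff pattern."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Every subscribed flow reacts independently; the publisher
        # knows nothing about who is listening.
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
received = []
# Two independent "flows" react to the same event without knowing each other.
bus.subscribe("user.signed_up", lambda p: received.append(("email", p["user_id"])))
bus.subscribe("user.signed_up", lambda p: received.append(("crm", p["user_id"])))
bus.publish("user.signed_up", {"user_id": "u_8821"})
```

Note why this is the hardest pattern to debug: nothing in the publisher's code tells you which flows ran as a result.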


# Connecting Two Flows — The Practical Steps

This uses the webhook pattern because it works across every platform.

Build the receiving flow first. Give Flow B a webhook trigger. Test it with a static payload in isolation before touching Flow A. If Flow B does not work alone, connecting it to Flow A just hides where the failure is.

Design the payload as a contract. Decide exactly what Flow B needs. Keep it flat. Add a schema_version field from day one — it costs nothing now and saves significant pain when the format changes later.

```json
{
  "user_id": "u_8821",
  "plan": "pro",
  "trigger_source": "signup_flow",
  "schema_version": "1.0",
  "correlation_id": "corr_4f92a",
  "timestamp": "2025-03-15T10:42:00Z"
}
```

Notice correlation_id in there too. A single unique string passed through every flow in the chain. Every log entry, every error, every execution record includes it. When something breaks in Flow C, you search that ID across all your flow logs and see the entire chain in sequence. Without it, you are matching timestamps and guessing.
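What "search that ID across all your flow logs" looks like in practice, assuming each flow writes structured log entries tagged with the correlation_id (the log shape here is an assumption):

```python
def trace(logs, correlation_id):
    """Return every log entry in the chain, in order, for one correlation_id."""
    return [entry for entry in logs if entry.get("correlation_id") == correlation_id]

# Illustrative log entries from three different flows.
logs = [
    {"flow": "signup_flow", "correlation_id": "corr_4f92a", "msg": "triggered"},
    {"flow": "plan_flow",   "correlation_id": "corr_9999",  "msg": "triggered"},
    {"flow": "email_flow",  "correlation_id": "corr_4f92a", "msg": "failed: timeout"},
]
# trace(logs, "corr_4f92a") returns the signup and email entries, in sequence,
# with the unrelated plan_flow entry filtered out.
```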

Add the HTTP POST step to Flow A. At the handoff point, add an HTTP request step pointing to Flow B's webhook. Decide right now: are you waiting for a response or firing and moving on?

Make Flow B idempotent. Network retries happen. Double triggers happen. If Flow B runs twice with the same user_id, it should produce the same result — not create two records. Check if the action was already performed before performing it again.
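A minimal idempotency sketch. The in-memory set stands in for what would be a database table or unique index in a real deployment; the function name is illustrative:

```python
# In production this would be a database table or unique constraint,
# not process memory.
processed = set()

def activate_plan(user_id, plan):
    """Idempotent handler: a retried or duplicated trigger is a no-op."""
    key = (user_id, plan)
    if key in processed:
        return "skipped"   # already done; do not create a second record
    processed.add(key)
    # ... perform the actual plan update here ...
    return "activated"
```

The check-before-act pattern is what makes network retries and double triggers safe: running the flow twice produces the same end state.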


# Error Propagation and Debugging

Flow A triggers Flow B. Flow B fails halfway through. Flow A already finished. Nothing surfaced the error. Three days later someone notices a user never got their plan activated. There is no log connecting the failure to the trigger. Recovery is manual.

The fix: every flow owns its own error state and signals failure explicitly. Build one dedicated error-handling flow. Every other flow calls it on failure — passing the failed flow name, the error, the input payload, and the correlation_id. One place to look. One place to fix.

Debugging across flow boundaries only works if the correlation_id is there from the first trigger. This is not optional infrastructure you add later — retrofitting it means touching every running flow. Add it on day one.
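A sketch of how a flow signals failure to the shared error handler. In a real system `report_failure` would POST to the error flow's webhook; here it just builds the report, and the flow names are hypothetical:

```python
def report_failure(flow_name, error, payload):
    """Build the report the dedicated error-handling flow receives."""
    return {
        "failed_flow": flow_name,
        "error": str(error),
        "input_payload": payload,
        "correlation_id": payload.get("correlation_id"),
    }

def run_step(flow_name, handler, payload):
    """Run a flow step; on failure, signal explicitly instead of dying silently."""
    try:
        return handler(payload)
    except Exception as err:
        report = report_failure(flow_name, err, payload)
        # In a real system: POST this report to the error flow's webhook.
        return report
```

The point of the wrapper is that no flow is allowed to fail without producing a report that carries the correlation_id back to the one place everyone looks.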


# When NOT to Split

This is the part most modular automation content skips.

Over-modularization is a real problem. Forty flows for a process that could be eight means forty execution logs to trace through when something breaks. Simple changes touch five flows instead of one.

Do not split if steps are tightly sequential and share heavy state. If Flow A and Flow B are constantly reading and writing the same record back and forth, they belong in one flow. Splitting just means passing fragile state through payloads.

Do not split when latency matters end-to-end. Async chains add real delay. If a user is waiting on a result, three chained async flows might introduce 3–5 seconds of lag. A single flow with all the steps might finish in under a second.

Split when: logic is genuinely reusable elsewhere, different teams own different parts, or a section fails frequently and isolation would contain the blast radius. Not before.


# Payload Versioning

Six months after launch, ten flows are calling Flow B with a payload contract. You need to add a field and rename another. You update Flow B. Three calling flows break silently because they are still sending the old format.

The schema_version field handles this. Flow B reads the version and routes accordingly — version 1.0 uses the old mapping, version 1.1 uses the new one. Both work during a migration window. You update calling flows one by one. When all are on 1.1, you remove the old handler.
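The version routing at Flow B's entry point can be sketched like this. The 1.1 changes here (renaming `plan` to `plan_code`, adding `region`) are invented for illustration:

```python
def handle_v1(payload):
    """Original contract: 'plan' field, no region."""
    return {"user": payload["user_id"], "plan": payload["plan"]}

def handle_v1_1(payload):
    """Hypothetical 1.1 contract: 'plan' renamed to 'plan_code', 'region' added."""
    return {"user": payload["user_id"], "plan": payload["plan_code"],
            "region": payload["region"]}

HANDLERS = {"1.0": handle_v1, "1.1": handle_v1_1}

def route(payload):
    """Flow B's entry point: read the version, pick the matching mapping."""
    # A payload with no version predates versioning, so it gets the oldest contract.
    version = payload.get("schema_version", "1.0")
    return HANDLERS[version](payload)
```

Both handlers stay live during the migration window; when the last caller is on 1.1, `handle_v1` is deleted.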

This is API versioning applied to flow payloads. Same problem, same solution. Every major platform supports the conditional routing needed for this — n8n via its Execute Workflow node, Make via router modules, Zapier via filter steps, and viaSocket via conditional branches at the flow entry point — viaSocket flow docs.


# Common Mistakes

Passing too much data between flows. Sending the full user object when the receiving flow needs four fields. When the upstream data model changes, every connected flow breaks. Send only what is needed. Treat the payload like a public API.

Sync where async would work. Defaulting to synchronous calls because it feels safer. Three blocking calls in a chain means three single points of failure that can stall the entire process. Default to async, justify sync explicitly.

No correlation ID until something breaks. The moment debugging becomes hard is always after the system is already running. Add it to the first payload you design, not the first incident you debug.

Testing only end-to-end. When the chain breaks, you cannot tell which flow caused it. Test each receiving flow in isolation with a static payload first. End-to-end comes after, not instead of, that.


# A Real Pipeline

Payment processor confirms a successful charge.

Flow 1 validates the event, extracts user ID and plan, generates a correlation_id, and calls Flow 2 synchronously — plan activation is the core action and must confirm before anything else runs.

Flow 2 updates the user's plan in the database and returns success or failure. If it fails, Flow 1 calls the error handler and stops.

Flow 1 then fires Flow 3 (confirmation email) and Flow 4 (sales Slack notification) asynchronously. Both get the correlation_id. Both can fail without affecting the activation that already completed.

Flow 5 is the error handler — called by any flow in the chain that fails, with the correlation ID, the failed step name, and the original input. One log. One recovery point.

Five flows. Clean ownership. Every one testable in isolation. The chain is debuggable because every execution carries the same ID across all logs.
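The five-flow pipeline above can be sketched as plain functions to show the sync/async split and the error path. Everything here is illustrative: real flows 3 and 4 would be fire-and-forget webhook calls, not direct function calls.

```python
import uuid

def flow_2_activate(payload):
    """Core action (called synchronously): update the plan, return a result."""
    return {"ok": True, "plan": payload["plan"]}

def flow_3_email(payload):
    return "email queued"        # async in a real system: fire and forget

def flow_4_slack(payload):
    return "slack queued"        # async in a real system: fire and forget

def flow_5_error(failed_flow, payload, error):
    """Shared error handler: one log, one recovery point."""
    return {"failed": failed_flow, "error": error,
            "correlation_id": payload["correlation_id"]}

def flow_1_orchestrate(charge_event):
    """Validate the charge event, mint the correlation_id, run the chain."""
    payload = {
        "user_id": charge_event["user_id"],
        "plan": charge_event["plan"],
        "correlation_id": f"corr_{uuid.uuid4().hex[:6]}",
    }
    result = flow_2_activate(payload)            # sync: must confirm first
    if not result["ok"]:
        return flow_5_error("flow_2", payload, "activation failed")
    flow_3_email(payload)                        # failures here cannot undo activation
    flow_4_slack(payload)
    return {"status": "activated", "correlation_id": payload["correlation_id"]}
```

Note that the correlation_id is minted once, at the entry point, and handed to every downstream flow unchanged.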


# Platform Notes

  • n8n — Native Execute Workflow node with synchronous sub-flow support and error surfacing back to the calling flow. Strongest native implementation. Docs

  • viaSocket — Flows connect via webhook triggers on the receiving end, with conditional branching for payload routing. Docs

  • Make — Cross-scenario calls via HTTP module or webhook trigger. Router module handles version branching cleanly. Docs

  • Zapier — Zap chaining via webhooks or the Trigger Zap step. Less control over sync vs async behavior. Docs

The pattern is identical across all of them. Node names and UI differ, the architecture does not.


1. Split flows when logic is genuinely reusable, team ownership differs, or failure isolation matters — not by default. Over-modularization creates debugging complexity and async chains carry real latency costs that affect users directly.

2. Design every inter-flow payload as a versioned contract from day one — include schema_version and correlation_id in the first payload you build, before you have ten flows depending on it.

3. Error handling is not a flow feature, it is a system feature — build one dedicated error-handling flow that every other flow calls on failure, and let the correlation_id connect every failure back to its origin trigger.