Concurrency

Updated 2026-05-16

CLEAR uses cooperative fibers for concurrency.

Fibers are lightweight threads managed by the CLEAR runtime — each has its own stack (4-256 KB) and runs until it yields. No OS threads are created per fiber; the scheduler multiplexes fibers onto a thread pool.

Note

CLEAR will support limited Finite State Machines in v0.2; full support for Finite State Machines is planned for v0.4.

BG — Background Fibers

Spawn a fiber that runs concurrently and returns a Promise:

p = BG { expensive_computation(data); };
# ... do other work ...
result = NEXT p;  # block until the fiber finishes

The last expression in the body becomes the promise's value. NEXT consumes the promise and returns the value.

Note

By v0.4, Finite State Machines will be the default, and you will opt in to stack-based fibers for re-entrant tasks or tasks that use FFI.

Modifiers

Modifiers go inside the braces, before a ->:

BG { @large:@pinned -> heavy_work(); }

| Modifier | Effect |
| --- | --- |
| @micro | 4 KB fiber stack |
| @standard | 16 KB fiber stack (default) |
| @large | 64 KB fiber stack |
| @xl | 256 KB fiber stack |
| @service | OS thread (not a fiber). Full OS stack. For heavy non-cooperative compute. |
| @pinned | Pin to local scheduler (no work stealing) |
| @parallel | Distribute to least-loaded scheduler |
| @arena | Thread-local arena allocation; only works with @pinned tasks |

Combine modifiers with a colon: @large:@arena gives a large stack with arena allocation.
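
For instance (heavy_work as above; assuming this particular pairing is legal):

# ILLUSTRATIVE
p = BG { @xl:@parallel -> heavy_work(); };  # 256 KB stack, sent to the least-loaded scheduler
result = NEXT p;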

By v0.2, @stateMachine and @stack will be available as options. The compiler will warn you when you should use a state machine, and block you from using one when it is not an option.

@service — OS Thread Spawning

@service spawns a dedicated OS thread instead of a green fiber. Use it for heavy-compute tasks that won't cooperatively yield; the OS handles preemption.

p = BG { @service ->
    trainModel(dataset);  # runs on its own OS thread
};
# fiber continues concurrently
result = NEXT p;  # blocks fiber until OS thread finishes

The OS thread gets its own Runtime with a 64 KB frame arena. No scheduler is involved; the thread runs independently until completion, then signals the Promise.

When to use @service vs fibers:

| Construct | Use for |
| --- | --- |
| @service | Heavy, non-cooperative compute that will not yield; the OS preempts it |
| Fibers (default BG) | Cooperative tasks that yield: I/O-bound work, short computations |

Captures

BG blocks capture outer variables by value (moved, not borrowed):

x = 42.0;
p = BG { x + 1.0; };  # x is moved into the fiber
# x is no longer usable here (affine ownership)
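result = NEXT p;  # 43.0 (illustrative): the moved value's result comes back through the promise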

THEN Chains

Chain sequential steps inside a single fiber:

result = BG {
    fetch("https://api.example.com/data") AS response
      THEN parse(response) AS parsed
      THEN transform(parsed);
};

Each AS name binds the result for subsequent steps. The last step's value becomes the promise result.

This is equivalent to:

result = BG {
    response = NEXT fetch("https://api.example.com/data");
    parsed = NEXT parse(response);
    transform(parsed);
};

THEN is most useful for short chains of tasks, especially within a complex pipeline:

result = BG {
    fetch("https://api.example.com/data1") AS r THEN parse(r),
    fetch("https://api.example.com/data2") AS r THEN parse(r)
};
# result = ~T[]; this is only valid when all branches yield the same T.

Error handling: use OR before the AS binding. If a step returns !T, handle it inline:

# ILLUSTRATIVE
result = BG {
    fetch(url) OR RAISE           # propagate error to caller
      AS response THEN parse(response) OR default_value
      AS parsed THEN transform(parsed);
};

DO — Fork-Join

Execute multiple branches concurrently, wait for all to complete:

# ILLUSTRATIVE
DO {
    update_database(record),
    send_notification(user),
    log_event(event)
}
# All three are done here.

Branches are separated by commas. Each runs in its own fiber. The DO block waits for all branches before continuing. Returns Void.

Branch Modifiers

Each branch can have its own modifiers:

# ILLUSTRATIVE
DO {
    @large -> heavy_computation(),
    @pinned -> cache_local_work()
}

NEXT — Promise Resolution

Block until a promise resolves:

p: ~Float64 = BG { 42.0; };
result: Float64 = NEXT p;

| Promise Type | NEXT returns | Behavior |
| --- | --- | --- |
| ~T | T | Consumes the promise (one-shot) |
| ~T @shared | T | Returns cached result (safe for multiple NEXT) |
| ~T[?] | ?T | Returns next value or nil (open stream) |
| ~T[INF] | T | Returns next value, never nil (infinite stream) |
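
The @shared row is the exception to one-shot consumption. A minimal sketch, assuming @shared is written on the promise type as shown:

# ILLUSTRATIVE
p: ~Float64 @shared = BG { expensive_computation(data); };
a = NEXT p;  # blocks until the fiber finishes, caches the result
b = NEXT p;  # returns the cached result without blocking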

BG STREAM — Generators

Spawn a fiber that yields values over time:

# Open stream (finite)
s: ~Float64[?] = BG STREAM {
    YIELD 1.0;
    YIELD 4.0;
    YIELD 9.0;
};
v1 = NEXT s;  # 1.0
v2 = NEXT s;  # 4.0
v3 = NEXT s;  # 9.0
v4 = NEXT s;  # NIL (exhausted)

# Infinite stream
counter: ~Float64[INF] = BG STREAM {
    MUTABLE i = 0.0;
    WHILE TRUE DO
        YIELD i;
        i = i + 1.0;
    END
};
v1 = NEXT counter;  # 0.0
v2 = NEXT counter;  # 1.0 (blocks until generator yields)
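
Consuming a stream is just repeated NEXT. A sketch that pulls the first three values from counter (assuming the usual < comparison operator):

# ILLUSTRATIVE
MUTABLE total = 0.0;
MUTABLE n = 0.0;
WHILE n < 3.0 DO
    total = total + NEXT counter;  # each NEXT resumes the generator
    n = n + 1.0;
END
# total = 3.0 (0.0 + 1.0 + 2.0)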

CONCURRENT — Parallel Pipelines

Apply pipeline operators in parallel with a persistent worker pool:

# ILLUSTRATIVE
results = items |> CONCURRENT(workers: 8) SELECT transform(_);
filtered = items |> CONCURRENT(workers: 4) WHERE predicate(_);
items |> CONCURRENT(workers: 2) EACH { _.value = 0.0; };

Options

| Option | Type | Default | Effect |
| --- | --- | --- | --- |
| workers | Number | 8 | Number of persistent worker fibers |
| batch | Number | 1 | Items each worker pulls per fetch from the shared index (larger = less index contention) |
| capacity | Number | auto (worker count rounded up to a power of 2, clamped 4-64) | Bounded channel ring-buffer size; only valid with a streaming / SHARD / range source |
| parallel | Bool | FALSE | TRUE = distribute workers across schedulers (multi-core) |
| size | Identifier | STANDARD | Stack size: MICRO, STANDARD, LARGE, XL |
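
A sketch combining several options (items and transform as in the earlier examples; the specific numbers are illustrative):

# ILLUSTRATIVE: 8 workers, 16 items per fetch, large stacks
results = items |> CONCURRENT(workers: 8, batch: 16, size: LARGE) SELECT transform(_);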

Supported Operators

CONCURRENT supports the SELECT, WHERE, and EACH pipeline operators, as shown in the examples above.

Error Handling

# Skip failed items
results = items
  |> CONCURRENT(workers: 4) SELECT risky_fn(_) OR PRUNE;

# Propagate first error
results = items
  |> CONCURRENT(workers: 4) SELECT risky_fn(_) OR RAISE;

How It Works

CONCURRENT spawns N persistent worker fibers that pull items from a shared atomic index. Zero per-item allocation — workers reuse their stack and context across all items. This is fundamentally different from spawning one fiber per item (which is what BG does).

results = items
  |> SELECT BG { risky_fn(_) OR RAISE };
# this is valid, but it would spawn N tasks all at once; it is NOT recommended.

Default (workers on local scheduler):
  CONCURRENT(workers: 8) SELECT transform(_)
  → 8 fibers on one thread, cooperative scheduling

Multi-core (workers distributed):
  CONCURRENT(workers: 8, parallel: TRUE) SELECT transform(_)
  → 8 fibers spread across all schedulers

Performance

| Pattern | Per-item cost | Use when |
| --- | --- | --- |
| CONCURRENT(workers: N) | ~0 (atomic fetchAdd only) | Bulk processing, batch transforms |
| Individual BG { } | ~60μs (GPA alloc) | Dynamic spawning, I/O-bound tasks |

Multi-Threading

CLEAR defaults to single-threaded scheduling. Set CLEAR_THREADS to enable multi-core:

CLEAR_THREADS=0 ./my_program   # auto-detect CPU count
CLEAR_THREADS=4 ./my_program   # 4 scheduler threads
CLEAR_THREADS=1 ./my_program   # single-threaded (default)

Each scheduler thread runs its own event loop with work stealing. Idle schedulers steal tasks from busy ones. @pinned tasks are exempt from stealing.
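
Putting the two together, a sketch of a multi-core bulk transform (launch with CLEAR_THREADS=4, or 0 for auto-detect, as above):

# ILLUSTRATIVE
results = items |> CONCURRENT(workers: 8, parallel: TRUE) SELECT transform(_);
# parallel: TRUE spreads the 8 workers across all scheduler threads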

Safety Rules

The compiler enforces these at compile time:

| Rule | Reason |
| --- | --- |
| @parallel + @local = error | @local has no synchronization |
| @parallel + @multiowned = error | Rc is non-atomic; use @shared (Arc) |
| @arena + @parallel = error | Arena memory is thread-local |
| Non-@pinned BG capturing from @pinned scope = error | Thread-local memory would escape |
| YIELD outside BG STREAM = error | YIELD only valid in generators |
| NEXT on non-promise = error | Can only await promises and streams |
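
As an illustration, the @arena + @parallel rule rejects a spawn like this at compile time (heavy_work as above):

# ILLUSTRATIVE: does not compile; arena memory is thread-local
p = BG { @arena:@parallel -> heavy_work(); };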

When to Use What

| Goal | Construct |
| --- | --- |
| Process a batch of items | \|> CONCURRENT(workers: N) SELECT/WHERE/EACH |
| Fire off a background task | BG { work(); } |
| Run independent tasks concurrently | DO { task1(), task2(), task3() } |
| Generate values lazily | BG STREAM { YIELD ...; } |
| Chain async steps | BG { step1() AS r THEN step2(r) } |
| Handle requests (server) | BG { @pinned:@arena -> handle(conn); } |

Known Limitations (v0.1)

Dynamic spawn overhead: spawning many individual BG tasks is competitive with Go at moderate thread counts (roughly parity at 8 threads on the 100K-spawn benchmark) but slower at high core counts — about 1.6x Go at small scale, up to ~3.5x at 32 threads. The remaining gap is idle schedulers spinning on the work-stealing deque instead of parking, not per-spawn allocation. Workaround: use CONCURRENT(workers: N) for bulk workloads. Scheduler parking (the fix) is tracked for post-v0.1.

No general bounded channel: CLEAR has Promises (one-shot) and Streams (unbounded). CONCURRENT streaming uses an internal bounded ring-buffer channel for backpressure (tunable via the capacity: option), but there is no general user-facing bounded producer/consumer channel primitive yet.
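
A sketch of the streaming case, assuming a BG STREAM source can be piped straight into CONCURRENT (process is a hypothetical stage):

# ILLUSTRATIVE
s: ~Float64[?] = BG STREAM {
    YIELD 1.0;
    YIELD 4.0;
    YIELD 9.0;
};
results = s |> CONCURRENT(workers: 2, capacity: 8) SELECT process(_);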

Source: docs/concurrency.md