Human workers are faster and more accurate for most tasks — but they're finite. Guild workers sleep, go offline, and have peak hours. When no worker accepts a task within your timeout window, our AI fallback fires automatically so your pipeline never stalls.
The 5-minute default
By default, the AI fallback threshold is 5 minutes (api_fallback_timeout_min: 5). You can set this to any value when creating a task — 1 minute for latency-critical pipelines, 60 minutes for batch work where human accuracy matters more than speed.
await og.tasks.create({
  type: 'judgment',
  guild: 'general',
  payload: { question: 'Is this content appropriate for all ages?' },
  api_fallback_timeout_min: 2, // fire AI after 2 min
  include_completion_source: true,
})
What the AI agent actually does
When the fallback fires, a Claude agent is spawned with a prompt constructed from the task's schema and payload. The agent returns a structured result in the same format as a human worker — a label, annotation, judgment, or ranking — along with a confidence score.
That result enters the same consensus pipeline as human results. If the AI's confidence is high and its answer is consistent with any prior human attempts, the task completes. If not, it re-queues.
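The consensus step can be sketched roughly as follows. This is an illustrative mock, not the actual pipeline internals: the result shape ({ value, confidence }), the function name resolveWithFallback, and the 0.85 confidence floor are all assumptions.

```javascript
// Illustrative confidence floor -- the real threshold is not documented here.
const CONFIDENCE_FLOOR = 0.85;

// Decide whether an AI fallback result completes the task or re-queues it.
function resolveWithFallback(aiResult, priorHumanResults) {
  const confident = aiResult.confidence >= CONFIDENCE_FLOOR;
  // "Consistent" here means the AI's answer matches every prior human answer.
  // With no prior human attempts, consistency holds vacuously.
  const consistent = priorHumanResults.every((r) => r.value === aiResult.value);
  if (confident && consistent) {
    return { status: 'completed', result: aiResult };
  }
  return { status: 'requeued', result: null };
}
```

Note that a confident AI answer that contradicts an earlier human attempt re-queues rather than completing, which keeps a lone fallback result from overriding human work.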
human_required: true
For tasks where AI completion is not acceptable — RLHF preference ranking, medical image annotation — you can set human_required: true. The AI fallback will never fire for these tasks. If no human completes within 24 hours, the task fails and you are not charged.
Two task types are always human-required regardless of this flag: rlhf tasks (where the point is to capture human preference, not AI preference) and judgment tasks in the medical guild (where regulatory context demands licensed human review).
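The gating rules above can be mirrored client-side if you want to know up front whether a task is eligible for AI fallback. This is a sketch under the rules as stated; the server is the source of truth, and the function name is hypothetical.

```javascript
// Returns true if the AI fallback is allowed to fire for this task.
function aiFallbackAllowed(task) {
  // rlhf tasks capture human preference; AI completion defeats the purpose.
  if (task.type === 'rlhf') return false;
  // Medical judgment tasks require licensed human review.
  if (task.type === 'judgment' && task.guild === 'medical') return false;
  // Otherwise the explicit flag decides.
  return !task.human_required;
}
```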
Transparency via completion_source
By default, we don't surface whether a task was completed by a human or an AI — the result is the result. If your application or downstream model needs to know, set include_completion_source: true and the response will include a completion_source field: 'human' or 'ai'.
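One common downstream use is weighting results differently by source, for example when feeding them back into model training. The sketch below assumes a result object carrying the completion_source field; the weights and function name are illustrative choices, not API behavior.

```javascript
// Assign a training weight based on who completed the task.
function weightForTraining(taskResult) {
  switch (taskResult.completion_source) {
    case 'human':
      return 1.0; // full weight for human-verified results
    case 'ai':
      return 0.5; // illustrative down-weighting of AI fallback results
    default:
      // completion_source is absent unless include_completion_source was set.
      return 1.0;
  }
}
```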
The training flywheel
Every human completion is exported nightly as a training pair — the task payload and the human-verified result. Over time, this data makes the AI fallback progressively better calibrated to the kinds of tasks OpenGuilds handles. The humans teach the AI, and the AI covers for the humans. That's the loop.
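Conceptually, the nightly export maps each human-completed task to a (payload, result) pair. The field names and filter below are assumptions for illustration, not the documented export schema.

```javascript
// Build training pairs from a batch of completed tasks.
// Only human completions are exported: the humans teach the AI.
function toTrainingPairs(completions) {
  return completions
    .filter((c) => c.completion_source === 'human')
    .map((c) => ({ input: c.payload, target: c.result }));
}
```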