When you trigger a task from your backend code, you need to set the TRIGGER_SECRET_KEY environment variable. You can find the value on the API keys page in the Trigger.dev dashboard. See the API keys documentation for more information.
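For example, in a .env file (the key below is a placeholder — copy your real secret key from the dashboard):

```
# .env — placeholder value; use your real key from the API keys page
TRIGGER_SECRET_KEY=tr_dev_xxxxxxxxxxxxxxxx
```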
Triggers a single run of a task with the payload you pass in, and any options you specify, without needing to import the task.
By using tasks.trigger(), you can pass in the task type as a generic argument, giving you full
type checking. Make sure you use a type import so that your task code is not imported into your
application.
```ts
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
// 👆 **type-only** import

//app/email/route.ts
export async function POST(request: Request) {
  //get the JSON from the request
  const data = await request.json();

  // Pass the task type to `trigger()` as a generic argument, giving you full type checking
  const handle = await tasks.trigger<typeof emailSequence>("email-sequence", {
    to: data.email,
    name: data.name,
  });

  //return a success response with the handle
  return Response.json(handle);
}
```
Triggers multiple runs of a task with the payloads you pass in, and any options you specify, without needing to import the task.
By using tasks.batchTrigger(), you can pass in the task type as a generic argument, giving you
full type checking. Make sure you use a type import so that your task code is not imported into
your application.
```ts
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
// 👆 **type-only** import

//app/email/route.ts
export async function POST(request: Request) {
  //get the JSON from the request
  const data = await request.json();

  // Pass the task type to `batchTrigger()` as a generic argument, giving you full type checking
  const batchHandle = await tasks.batchTrigger<typeof emailSequence>(
    "email-sequence",
    data.users.map((u) => ({ payload: { to: u.email, name: u.name } }))
  );

  //return a success response with the handle
  return Response.json(batchHandle);
}
```
Triggers a single run of a task with the payload you pass in, and any options you specify, and then polls the run until it’s complete.
By using tasks.triggerAndPoll(), you can pass in the task type as a generic argument, giving you
full type checking. Make sure you use a type import so that your task code is not imported into
your application.
```ts
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";

//app/email/route.ts
export async function POST(request: Request) {
  //get the JSON from the request
  const data = await request.json();

  // Pass the task type to `triggerAndPoll()` as a generic argument, giving you full type checking
  const result = await tasks.triggerAndPoll<typeof emailSequence>(
    "email-sequence",
    {
      to: data.email,
      name: data.name,
    },
    { pollIntervalMs: 5000 }
  );

  //return a success response with the result
  return Response.json(result);
}
```
The above code is just a demonstration of the API and is not recommended for use in an API route like this, because it will block the request until the task is complete.
Task instance methods are available on the Task object you receive when you define a task. We recommend you use these methods inside another task to trigger subtasks.
This is where it gets interesting. You can trigger a task and then wait for the result. This is useful when you need to call a different task and then use the result to continue with your task.
Don't run triggerAndWait() in parallel, e.g. with Promise.all(). Instead, use batchTriggerAndWait() if you can, or a for loop if you can't.
To control concurrency using batch triggers, you can set queue.concurrencyLimit on the child task.
```ts
export const parentTask = task({
  id: "parent-task",
  run: async (payload: string) => {
    const result = await childTask.triggerAndWait("some-data");
    console.log("Result", result);

    //...do stuff with the result
  },
});
```
The result object is a “Result” type that needs to be checked to see if the child task run was successful:
/trigger/parent.ts
```ts
export const parentTask = task({
  id: "parent-task",
  run: async (payload: string) => {
    const result = await childTask.triggerAndWait("some-data");
    if (result.ok) {
      console.log("Result", result.output); // result.output is the typed return value of the child task
    } else {
      console.error("Error", result.error); // result.error is the error that caused the run to fail
    }
  },
});
```
If instead you just want to get the output of the child task, and throw an error if the child task failed, you can use the unwrap method:
You can batch trigger a task and wait for all the results. This is useful for the fan-out pattern, where you need to call a task multiple times and then wait for all the results to continue with your task.
Instead, pass in all items at once and set an appropriate maxConcurrency. Alternatively, run your batches sequentially with a for loop.
To control concurrency, you can set queue.concurrencyLimit on the child task.
When using batchTriggerAndWait, you have full control over how to handle failures within the batch. The method returns an array of run results, allowing you to inspect each run’s outcome individually and implement custom error handling.
Here’s how you can manage run failures:
- Inspect individual run results: Each run in the returned array has an ok property indicating success or failure.
- Access error information: For failed runs, you can examine the error property to get details about the failure.
- Choose your failure strategy: You have two main options:
  - Fail the entire batch: Throw an error if any run fails, causing the parent task to be reattempted.
  - Continue despite failures: Process the results without throwing an error, allowing the parent task to continue.
- Implement custom logic: You can create sophisticated handling based on the number of failures, types of errors, or other criteria.
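As one example of custom logic, a tolerance-based strategy might fail the batch only once failures pass a threshold (the helper below is hypothetical, not part of the SDK):

```ts
// Minimal shape of a finished run, mirroring the ok/error fields described above.
type BatchRun = { id: string; ok: boolean; error?: unknown };

// Hypothetical helper: tolerate up to `maxFailures` failed runs; beyond that,
// throw so the parent task is reattempted. Otherwise return the failed runs
// so the caller can log, retry, or ignore them.
function collectFailures(runs: BatchRun[], maxFailures: number): BatchRun[] {
  const failed = runs.filter((run) => !run.ok);
  if (failed.length > maxFailures) {
    throw new Error(`${failed.length} of ${runs.length} runs failed (tolerated: ${maxFailures})`);
  }
  return failed;
}
```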
Here’s an example of how you might handle run failures:
```ts
const result = await batchChildTask.batchTriggerAndWait([
  { payload: "item1" },
  { payload: "item2" },
  { payload: "item3" },
]);

// Result will contain the finished runs.
// They're only finished if they have succeeded or failed.
// "Failed" means all attempts failed
for (const run of result.runs) {
  // Check if the run succeeded
  if (run.ok) {
    logger.info("Batch task run succeeded", { output: run.output });
  } else {
    logger.error("Batch task run error", { error: run.error });

    //You can choose if you want to throw an error and fail the entire run
    throw new Error(`Fail the entire run because ${run.id} failed`);
  }
}
```
When you want to trigger a task now, but have it run at a later time, you can use the delay option:
```ts
// Delay the task run by 1 hour
await myTask.trigger({ some: "data" }, { delay: "1h" });

// Delay the task run by 88 seconds
await myTask.trigger({ some: "data" }, { delay: "88s" });

// Delay the task run by 1 hour and 52 minutes and 18 seconds
await myTask.trigger({ some: "data" }, { delay: "1h52m18s" });

// Delay until a specific time
await myTask.trigger({ some: "data" }, { delay: "2024-12-01T00:00:00" });

// Delay using a Date object
await myTask.trigger({ some: "data" }, { delay: new Date(Date.now() + 1000 * 60 * 60) });

// Delay using a timezone
await myTask.trigger({ some: "data" }, { delay: new Date("2024-07-23T11:50:00+02:00") });
```
Runs that are delayed and have not been enqueued yet will display in the dashboard with a “Delayed” status:
Delayed runs will be enqueued at the time specified, and will run as soon as possible after that
time, just as a normally triggered run would.
You can cancel a delayed run using the runs.cancel SDK function:
```ts
import { runs } from "@trigger.dev/sdk/v3";

await runs.cancel("run_1234");
```
You can also reschedule a delayed run using the runs.reschedule SDK function:
```ts
import { runs } from "@trigger.dev/sdk/v3";

// The delay option here takes the same format as the trigger delay option
await runs.reschedule("run_1234", { delay: "1h" });
```
The delay option is also available when using batchTrigger:
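For example, each batch item can carry its own options object, so individual runs in the batch can be delayed independently (the payload shape and the myTask reference below are assumptions for illustration):

```ts
// Hypothetical payloads: each batch item passes its own options, including delay.
const items = [
  { payload: { userId: "user_1" }, options: { delay: "10m" } },
  { payload: { userId: "user_2" }, options: { delay: "1h" } },
];

// With the SDK, assuming myTask is defined in your trigger folder:
// await myTask.batchTrigger(items);
```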
You can set a TTL (time to live) when triggering a task, which will automatically expire the run if it hasn’t started within the specified time. This is useful for ensuring that a run doesn’t get stuck in the queue for too long.
All runs in development have a default ttl of 10 minutes. You can disable this by setting the
ttl option.
```ts
import { myTask } from "./trigger/myTasks";

// Expire the run if it hasn't started within 1 hour
await myTask.trigger({ some: "data" }, { ttl: "1h" });

// If you specify a number, it will be treated as seconds
await myTask.trigger({ some: "data" }, { ttl: 3600 }); // 1 hour
```
When a run is expired, it will be marked as “Expired” in the dashboard:
When you use both delay and ttl, the TTL will start counting down from the time the run is enqueued, not from the time the run is triggered.
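In other words, a run's expiry time works out to trigger time + delay + TTL. A small sketch of that arithmetic (illustrative only; the platform computes this for you):

```ts
// Illustrative only: the TTL clock starts when the run is enqueued,
// i.e. after any delay has elapsed — not when trigger() was called.
function expiresAt(triggeredAt: Date, delayMs: number, ttlMs: number): Date {
  const enqueuedAtMs = triggeredAt.getTime() + delayMs; // run enters the queue after the delay
  return new Date(enqueuedAtMs + ttlMs); // TTL counts down from enqueue time
}
```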
You can provide an idempotencyKey to ensure that a task is only triggered once with the same key. This is useful if you are triggering a task within another task that might be retried:
```ts
import { idempotencyKeys, task } from "@trigger.dev/sdk/v3";

export const myTask = task({
  id: "my-task",
  retry: {
    maxAttempts: 4,
  },
  run: async (payload: any) => {
    // By default, idempotency keys generated are unique to the run, to prevent retries from duplicating child tasks
    const idempotencyKey = await idempotencyKeys.create("my-task-key");

    // childTask will only be triggered once with the same idempotency key
    await childTask.triggerAndWait(payload, { idempotencyKey });

    // Do something else, that may throw an error and cause the task to be retried
  },
});
```
For more information, see our Idempotency documentation.
When you trigger a task you can override the concurrency limit. This is really useful if you sometimes have high priority runs.
The task:
/trigger/override-concurrency.ts
```ts
const generatePullRequest = task({
  id: "generate-pull-request",
  queue: {
    //normally when triggering this task it will be limited to 1 run at a time
    concurrencyLimit: 1,
  },
  run: async (payload) => {
    //todo generate a PR using OpenAI
  },
});
```
Triggering from your backend and overriding the concurrency:
app/api/push/route.ts
```ts
import { generatePullRequest } from "~/trigger/override-concurrency";

export async function POST(request: Request) {
  const data = await request.json();

  if (data.branch === "main") {
    //trigger the task, with a different queue
    const handle = await generatePullRequest.trigger(data, {
      queue: {
        //the "main-branch" queue will have a concurrency limit of 10
        //this triggered run will use that queue
        name: "main-branch",
        concurrencyLimit: 10,
      },
    });

    return Response.json(handle);
  } else {
    //triggered with the default (concurrency of 1)
    const handle = await generatePullRequest.trigger(data);
    return Response.json(handle);
  }
}
```
If you’re building an application where you want to run tasks for your users, you might want a separate queue for each of your users. (It doesn’t have to be users, it can be any entity you want to separately limit the concurrency for.)
You can do this by using concurrencyKey. It creates a separate queue for each value of the key.
Your backend code:
app/api/pr/route.ts
```ts
import { generatePullRequest } from "~/trigger/override-concurrency";

export async function POST(request: Request) {
  const data = await request.json();

  if (data.isFreeUser) {
    //free users can only have 1 PR generated at a time
    const handle = await generatePullRequest.trigger(data, {
      queue: {
        //every free user gets a queue with a concurrency limit of 1
        name: "free-users",
        concurrencyLimit: 1,
      },
      concurrencyKey: data.userId,
    });

    //return a success response with the handle
    return Response.json(handle);
  } else {
    //trigger the task, with a different queue
    const handle = await generatePullRequest.trigger(data, {
      queue: {
        //every paid user gets a queue with a concurrency limit of 10
        name: "paid-users",
        concurrencyLimit: 10,
      },
      concurrencyKey: data.userId,
    });

    //return a success response with the handle
    return Response.json(handle);
  }
}
```
We recommend keeping your task payloads as small as possible. Task payloads are currently subject to a hard limit of 10MB.
If your payload size is larger than 512KB, instead of saving the payload to the database, we will upload it to an S3-compatible object store and store the URL in the database.
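If you want to know ahead of time whether a payload will be stored inline or offloaded, you can estimate its serialized size. The helper below is a hypothetical sketch; the thresholds come from the limits described above:

```ts
const OFFLOAD_THRESHOLD_BYTES = 512 * 1024; // above 512KB, the payload is offloaded to object storage
const HARD_LIMIT_BYTES = 10 * 1024 * 1024; // 10MB hard limit on payload size

// Hypothetical helper: estimate where a JSON-serializable payload will be stored.
function classifyPayload(payload: unknown): "inline" | "offloaded" {
  const bytes = Buffer.byteLength(JSON.stringify(payload), "utf8");
  if (bytes > HARD_LIMIT_BYTES) {
    throw new Error(`Payload is ${bytes} bytes, over the 10MB hard limit`);
  }
  return bytes > OFFLOAD_THRESHOLD_BYTES ? "offloaded" : "inline";
}
```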
When your task runs, we automatically download the payload from the object store and pass it to your task function. The runs.retrieve SDK function will also return a payloadPresignedUrl so you can download the payload yourself if needed.
We also use this same system for dealing with large task outputs, and subsequently will return a
corresponding outputPresignedUrl. Task outputs are limited to 100MB.
If you need to pass larger payloads, you’ll need to upload the payload to your own storage and pass a URL to the file in the payload instead. For example, upload the file to S3 and then send a presigned URL that expires after a set time:
```ts
import { myTask } from "./trigger/myTasks";
import { s3Client, getSignedUrl, PutObjectCommand, GetObjectCommand } from "./s3";
import { createReadStream } from "node:fs";

// Upload file to S3
await s3Client.send(
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "my-file.json",
    Body: createReadStream("large-payload.json"),
  })
);

// Create presigned URL
const presignedUrl = await getSignedUrl(
  s3Client,
  new GetObjectCommand({
    Bucket: "my-bucket",
    Key: "my-file.json",
  }),
  {
    expiresIn: 3600, // expires in 1 hour
  }
);

// Now send the URL to the task
const handle = await myTask.trigger({
  url: presignedUrl,
});
```
When using batchTrigger or batchTriggerAndWait, the total size of all payloads cannot exceed 10MB. This means if you are doing a batch of 100 runs, each payload should be less than 100KB.
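One way to stay under that limit is to split a large item list into multiple batches by serialized size before triggering. The helper below is a hypothetical sketch; measuring via JSON.stringify is an approximation of how the payload size is counted:

```ts
// Hypothetical helper: split batch items into chunks whose combined
// serialized payload size stays under maxBytes (10MB by default).
function chunkBatchItems<T>(
  items: { payload: T }[],
  maxBytes: number = 10 * 1024 * 1024
): { payload: T }[][] {
  const batches: { payload: T }[][] = [];
  let current: { payload: T }[] = [];
  let currentBytes = 0;

  for (const item of items) {
    const bytes = Buffer.byteLength(JSON.stringify(item.payload), "utf8");
    if (current.length > 0 && currentBytes + bytes > maxBytes) {
      batches.push(current); // start a new batch before exceeding the limit
      current = [];
      currentBytes = 0;
    }
    current.push(item);
    currentBytes += bytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each resulting chunk can then be passed to a separate batchTrigger call.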