
Log Drains Reference

Learn about Log Drain types and sources.

Log Drains allow you to collect logs from your deployments. To enable Log Drains, you must supply a destination URL to send the logs to. This URL comes from the service that ingests the logs.

Vercel sends logs to destination URLs over HTTPS, HTTP, TLS, or TCP every time logs are generated.

Vercel supports three different types of Log Drains: json, ndjson, and syslog.

When you choose the json type, the URL receives an HTTPS or HTTP POST request with a JSON array in the POST body.

If a log entry's statusCode is returned with a value of -1, no response was returned and the lambda crashed. If, in the same entry, proxy.statusCode is returned with -1, the revalidation occurred in the background.
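
For illustration, a drain consumer might branch on these values as in the following sketch; the LogEntry shape shown is a partial assumption based on the JSON format below:

handle-status-codes.ts
// Sketch of handling the special -1 status codes described above.
// LogEntry is a partial, assumed shape based on the JSON format below.
interface LogEntry {
  statusCode?: number;
  proxy?: { statusCode?: number };
}

function describeStatus(entry: LogEntry): string[] {
  const notes: string[] = [];
  if (entry.statusCode === -1) {
    notes.push('no response returned; the lambda crashed');
  }
  if (entry.proxy?.statusCode === -1) {
    notes.push('revalidation occurred in the background');
  }
  return notes;
}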

The logs are buffered and submitted as batches with the following formats:

example-url
 
[
  {
    "id": <identifier>,
    "message": <Log messages that push the log over 4 KB can be truncated to only show tail>,
    "timestamp": <timestamp>,
    "type": <"stdout" or "stderr">,
    "source": <"build", "static", "external", or "lambda">,
    "projectId": <identifier of project>,
    "deploymentId": <identifier of deployment>,
    "buildId": <identifier of build>,
    "host": <deployment unique url hostname>,
    "entrypoint": <entrypoint>
  },
  {
    "id": <identifier>,
    "message": <Log messages that push the log over 4 KB can be truncated to only show tail >,
    "timestamp": <timestamp>,
    "requestId": <identifier of request>,
    "statusCode": <HTTP status code of request>,
    "source": <"build", "static", "external", or "lambda">,
    "projectId": <identifier of project>,
    "deploymentId": <identifier of deployment>,
    "buildId": <identifier of build only on build logs>,
    "destination": <origin of external content only on external logs>,
    "host": <deployment unique url hostname>,
    "path": <function or the dynamic path of the request>,
    "executionRegion": <region where the request is executed>,
    "level": <"error", "warning", or "info">,
    "proxy": {
      "timestamp": <timestamp of proxy request>,
      "method": <method of request>,
      "scheme": <protocol of request>,
      "host": <alias hostname if exists>,
      "path": <request path>,
      "userAgent": <user agent>,
      "referer": <referer>,
      "statusCode": <HTTP status code of proxy request>,
      "clientIp": <client IP>,
      "region": <region request is processed>,
      "cacheId": <original request id when request is served from cache>,
      "vercelCache": <the X-Vercel-Cache value sent to the browser>
    }
  }
]
 

The requests are posted with an x-vercel-signature header that contains a hash signature you can use to validate the request body. See the Securing your Log Drains section to learn how to verify requests.

When you choose the ndjson type, the URL receives an HTTPS or HTTP POST request with JSON objects delimited by newlines (\n) in the POST body. See the ndjson npm package for more information on the structure.

Each request receives HTTP headers including x-vercel-signature.

The following are two example POST bodies:

ndjson-post-body
{"id": "1573817187330377061717300000","message": "done","timestamp": 1573817187330,"type": "stdout","source": "build","projectId": "abcdefgdufoJxB6b9b1fEqr1jUtFkyavUURbnDCFCnZxgs","deploymentId": "dpl_233NRGRjVZX1caZrXWtz5g1TAksD","buildId": "bld_cotnkcr76","host": "*.vercel.app","entrypoint": "api/index.js"}
{"id": "1573817250283254651097202070","message": "START RequestId: 643af4e3-975a-4cc7-9e7a-1eda11539d90 Version: $LATEST\\n2019-11-15T11:27:30.721Z\\t643af4e3-975a-4cc7-9e7a-1eda11539d90\\tINFO\\thello\\nEND RequestId: 643af4e3-975a-4cc7-9e7a-1eda11539d90\\nREPORT RequestId: 643af4e3-975a-4cc7-9e7a-1eda11539d90\\tDuration: 16.76 ms\\tBilled Duration: 100 ms\\tMemory Size: 1024 MB\\tMax Memory Used: 78 MB\\tInit Duration: 186.49 ms\\t\\n","timestamp": 1573817250283,"source": "lambda","requestId": "894xj-1573817250172-7847d20a4939","statusCode": 200,"proxy": {"timestamp": 1573817250172,"path": "/dynamic/some-value.json?route=some-value","userAgent": ["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36"],"referer": "*.vercel.app","method": "GET","scheme": "https","host": "test.vercel.app","statusCode": 200,"clientIp": "120.75.16.101","region": "sfo1"},"projectId": "abcdefgdufoJxB6b9b1fEqr1jUtFkyavUURbnDCFCnZxgs","deploymentId": "dpl_233NRGRjVZX1caZrXWtz5g1TAksD","host": "test-3i9jacdr-team-name.vercel.app","path": "/dynamic/[route].json"}
Deprecated: Syslog is not supported in configurable Log Drains. It is a deprecated feature available only to integrations.

When you choose the syslog type, Vercel connects to the URL over TLS or TCP. Log Drain messages are formatted according to RFC 5424 and framed using octet counting as defined in RFC 6587.
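
As an illustrative sketch, octet-counted frames can be split out of a TCP stream by reading the leading byte count before each message; the splitFrames helper below is an assumption about how a consumer might do this, not part of any Vercel SDK:

parse-octet-framing.ts
// Sketch of RFC 6587 octet-counting framing: each frame is "<length> <message>",
// where <length> is the byte length of <message>. Assumes the buffer starts at a
// frame boundary; an incomplete trailing frame is returned as `rest` so it can be
// completed by the next TCP chunk.
function splitFrames(buffer: Buffer): { frames: string[]; rest: Buffer } {
  const frames: string[] = [];
  let offset = 0;
  while (offset < buffer.length) {
    const space = buffer.indexOf(0x20, offset); // first space after the length prefix
    if (space === -1) break;
    const length = Number(buffer.subarray(offset, space).toString('ascii'));
    if (!Number.isInteger(length)) break;
    const start = space + 1;
    if (start + length > buffer.length) break; // incomplete frame, wait for more data
    frames.push(buffer.subarray(start, start + length).toString('utf8'));
    offset = start + length;
  }
  return { frames, rest: buffer.subarray(offset) };
}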

Syslog messages resemble the following:

syslog-message
 
425 <142>1 2019-11-15T11:42:22.562Z *.vercel.app now proxy - [proxy@54735 requestId="q8k4w-1573818142562-9adfb40ce9d4" statusCode="200" method="GET" path="/api" userAgent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36" referer="*.vercel.app" clientIp="120.75.16.101" region="sfo1" signature="b847f4dd531d0b41094fb4b38fd62bde0b0e29a5"]587 <150>1 2019-11-15T11:42:22.833Z *.vercel.app now lambda - [lambda@54735 requestId="q8k4w-1573818142562-9adfb40ce9d4" statusCode="200" path="api/index.js" signature="0900101157dac2a2e555524c2f8d61229b15307d"] BOMSTART RequestId: ec00309f-4514-4128-8b8a-9a0e74900283 Version: $LATEST
2019-11-15T11:42:23.176Z\tec00309f-4514-4128-8b8a-9a0e74900283\tINFO\thello
END RequestId: ec00309f-4514-4128-8b8a-9a0e74900283
REPORT RequestId: ec00309f-4514-4128-8b8a-9a0e74900283\tDuration: 20.08 ms\tBilled Duration: 100 ms Memory Size: 1024 MB\tMax Memory Used: 77 MB\tInit Duration: 157.97 ms
 

Similar to JSON and NDJSON drains, a syslog message contains a hash signature for verifying messages, carried in the signature key of the structured data. For syslog drains, the signature is computed using an OAuth2 secret and the MSG section of the syslog message.
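
Assuming you have already extracted the MSG section and the signature value from the structured data, a verification sketch could look like the following (the function name and arguments are illustrative):

verify-syslog-signature.ts
import crypto from 'crypto';

// Sketch: verify a syslog drain message.
// `msg` is the MSG section of the syslog frame and `signature` is the value
// taken from the signature="..." structured data; extracting both from the
// raw frame is not shown here.
function verifySyslogSignature(msg: string, signature: string, secret: string): boolean {
  const expected = crypto.createHmac('sha1', secret).update(msg).digest('hex');
  if (expected.length !== signature.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}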

All drains support transport-level encryption using the HTTPS or TLS protocols. It is recommended to use these encrypted protocols in production and to reserve the unencrypted ones for development and testing.

When your server starts receiving payloads, any third party that knows the URL could send log messages to it. Therefore, it is recommended to use HTTP Basic Authentication, or to verify that messages were sent from Vercel using an OAuth2 secret and hash signature.

For example, if you have a basic HTTP server subscribing to Log Drains, the payload can be validated like so:

server.ts
import crypto from 'crypto';

export async function POST(request: Request) {
  const { INTEGRATION_SECRET } = process.env;

  if (typeof INTEGRATION_SECRET !== 'string') {
    throw new Error('No integration secret found');
  }

  // Compute the HMAC of the raw request body and compare it against the
  // x-vercel-signature header before trusting the payload.
  const rawBody = await request.text();
  const rawBodyBuffer = Buffer.from(rawBody, 'utf-8');
  const bodySignature = sha1(rawBodyBuffer, INTEGRATION_SECRET);

  if (bodySignature !== request.headers.get('x-vercel-signature')) {
    return Response.json(
      {
        code: 'invalid_signature',
        error: "signature didn't match",
      },
      { status: 401 },
    );
  }

  console.log(rawBody);

  return new Response('OK', { status: 200 });
}

function sha1(data: Buffer, secret: string): string {
  return crypto.createHmac('sha1', secret).update(data).digest('hex');
}

You can compute the signature using an HMAC hexdigest from the secret token of the OAuth2 app and request body, then compare it with the value of the x-vercel-signature header to validate the payload.
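
If you use HTTP Basic Authentication instead, a minimal check could look like the sketch below; DRAIN_USER and DRAIN_PASSWORD are illustrative names for credentials you configure yourself as part of the destination URL:

basic-auth-check.ts
// Sketch: reject requests that don't carry the expected Basic credentials.
// DRAIN_USER and DRAIN_PASSWORD are placeholder environment variables.
function isAuthorized(request: Request): boolean {
  const header = request.headers.get('authorization') ?? '';
  const expected =
    'Basic ' +
    Buffer.from(
      `${process.env.DRAIN_USER}:${process.env.DRAIN_PASSWORD}`,
    ).toString('base64');
  return header === expected;
}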

In order to configure the logs you want to receive, you can provide one or more sources when creating a log drain:

Value      Details
static     Requests to static assets like HTML and CSS files
lambda     Output from Vercel Functions like API Routes
edge       Output from Edge Functions like Middleware
build      Output from the Build Step
external   External rewrites to a different domain

Example:

setting-sources
{
  "sources": ["static", "lambda", "edge"]
}

While this parameter is optional, providing at least one log source is highly recommended. If you do not provide any log sources, the log drain will default to edge, lambda, static, and external.

To configure which environments you want to receive logs from, you can pass one or more values to the environments property when creating a log drain:

Value        Details
production   Logs from production deployments with assigned domain(s)
preview      Logs from deployments accessed through the generated deployment URL

Example:

setting-environments
{
  "environments": ["production", "preview"]
}

If you want to reduce the number of logs you receive, you can provide a samplingRate when creating a log drain. This value is a number between 0.01 and 1 that represents the fraction of log lines you want to receive (for example, 0.5 means 50%).

Example:

setting-sampling-rate
{
  "samplingRate": 0.5 // 50% of all log lines
}
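
These options can be combined when creating a drain through the Vercel REST API. The sketch below assumes the /v1/log-drains endpoint, the name, url, and deliveryFormat fields, and a VERCEL_TOKEN variable; check the Vercel REST API reference for the exact endpoint and payload before relying on it:

create-log-drain.ts
// Sketch: create a configurable Log Drain combining sources, environments,
// and samplingRate. The endpoint path and the name/url/deliveryFormat fields
// are assumptions; consult the Vercel REST API reference for the exact shape.
async function createLogDrain() {
  const res = await fetch('https://api.vercel.com/v1/log-drains', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.VERCEL_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: 'my-drain', // illustrative name
      url: 'https://example.com/log-drain',
      deliveryFormat: 'ndjson',
      sources: ['static', 'lambda', 'edge'],
      environments: ['production', 'preview'],
      samplingRate: 0.5, // receive roughly 50% of all log lines
    }),
  });
  if (!res.ok) {
    throw new Error(`Failed to create log drain: ${res.status}`);
  }
  return res.json();
}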