The unauthenticated postback hole I had in production for months
Trcker's postback endpoint accepted unauthenticated conversions for months. Sentry logged the warnings. Nothing blocked. Here is what happened, why it took a security audit to catch it, and what changed.
I run an affiliate tracking platform. Brands accept conversions from advertisers via a generic postback URL: /api/postback/[brandSlug]/[offerSlug]. The endpoint is supposed to require a shared secret on every conversion, either as a header or query parameter, and reject anything without one.
That was the design. For months, that was not what shipped.
What was actually in production
The intended auth model is the same one Impact, Everflow, Awin, and ShareASale use: a shared secret in the URL. It is not HMAC of the payload. Postback URLs get pasted into advertiser admin panels (Shopify, native ad networks, server-side trackers) where computing a fresh signature per request is not practical. Shared-token in the URL is the industry standard.
The secret lives in offers.postback_secret. The handler accepts the request if the secret matches via constant-time comparison (header preferred, query param as fallback). Without a matching secret, it should reject.
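Concretely, a conforming postback call can present the secret either way. A sketch of building the request (the brand/offer slugs, hostname, and secret value are placeholders; the header name `x-postback-secret`, the `secret` query param, and the URL shape are the ones the handler reads):

```typescript
// Building a postback request both ways. Slugs, host, and secret are placeholders.
const base = "https://tracker.example/api/postback/acme-brand/summer-sale";
const secret = "placeholder-secret";

// Preferred: shared secret in a header.
const headers = new Headers({ "x-postback-secret": secret });

// Fallback: secret as a query parameter, for advertiser panels that can
// only store a plain URL.
const url = new URL(base);
url.searchParams.set("secret", secret);
url.searchParams.set("click_id", "c_123"); // conversion params are illustrative

console.log(url.toString());
```

The query-param fallback exists because many advertiser panels accept nothing but a URL string; the header form is for integrations that control their own HTTP client.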
Should. The original schema added the column as nullable. The rollout plan was reasonable on paper: newly created offers auto-generate a secret, existing offers stay NULL until the brand owner opts in. While the secret is null, the route runs in "legacy mode" with a deprecation warning to Sentry. Once a secret is set, enforcement is on.
Translated: if an offer was old enough to predate the secret feature, anyone who knew the brand slug and offer slug could fire conversions with no authentication. A scraped tracking URL was all you needed.
Postback handler with the bug: null-secret branch accepts unauthenticated requests
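A minimal reconstruction of that branch (function and variable names are my assumptions; the null-secret behavior and the warning text match what the post describes):

```typescript
// Reconstruction of the buggy check, before the fix. Names are assumptions.
type LegacyVerdict = { accepted: boolean; warning?: string };

function verifyPostbackLegacy(
  storedSecret: string | null, // offers.postback_secret, nullable in the old schema
  providedSecret: string | null,
): LegacyVerdict {
  if (storedSecret === null) {
    // The bug: NULL meant "legacy mode" -- report to Sentry, then accept anyway.
    return { accepted: true, warning: "no secret, accepting unsigned" };
  }
  // Secret configured: enforce. (The real code used a constant-time compare.)
  return { accepted: providedSecret === storedSecret };
}
```

Every offer created before the secret column shipped took the first branch.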
The impact
Three vectors, in order of severity:
- Forged conversions on legacy offers could drain a brand's partner payout budget.
- An affiliate could inflate their own commissions by firing fake conversions against their own affiliate ID.
- Reporting was polluted. The data brands looked at to make payout decisions was wrong.
Why I missed it
Three reasons, in order of severity.
1. "Legacy mode" sounded like a transition. It was a backdoor.
The plan was "auto-generate secrets for new offers, let old offers opt in." That sounds like a graceful migration. What it actually meant was "the route is unauthenticated for an unbounded subset of offers, with no deadline to fix it."
Soft migrations to security-critical features are not migrations. They are permanent holes with optimistic labels.
2. Sentry warnings were not alerting
The warning said "no secret, accepting unsigned." That language sounds like the code caught something. It did not. It noticed something and let it through. I had a Sentry dashboard. The warnings showed up there. I did not realize for months that the warning meant "we let this go," not "we blocked this."
Sentry is an observation tool. It surfaces what your code reported. It does not tell you whether your code did the right thing.
3. The tests covered the happy path
I had tests for the signed-postback path. The null-secret path was assumed to be a transitional state that legitimate offers would eventually leave. There were no tests asserting that the null path should reject. There were also no tests asserting it should accept, so the behavior was tested by nobody.
The fix
The Phase 1.6 fix kept the same auth pattern (shared token in URL/header) but removed the legacy permissive mode. The current handler returns one of three reasons when verification fails:
```typescript
// src/lib/conversions/postback-signature.ts
import { timingSafeEqual } from "node:crypto";

export type PostbackVerifyResult =
  | { ok: true }
  | { ok: false; reason: "unconfigured" | "missing" | "mismatch" };

export function verifyPostbackSecret(
  expected: string | null,
  headers: Headers,
  searchParams: URLSearchParams,
): PostbackVerifyResult {
  // No secret stored for this offer: reject. This used to be "legacy mode".
  if (!expected) {
    return { ok: false, reason: "unconfigured" };
  }

  // Header preferred, query param as fallback.
  const provided =
    headers.get("x-postback-secret") ?? searchParams.get("secret") ?? "";
  if (!provided) {
    return { ok: false, reason: "missing" };
  }

  // timingSafeEqual throws on unequal lengths, so check lengths first.
  const expectedBuf = Buffer.from(expected, "utf8");
  const providedBuf = Buffer.from(provided, "utf8");
  if (expectedBuf.length !== providedBuf.length) {
    return { ok: false, reason: "mismatch" };
  }
  if (!timingSafeEqual(expectedBuf, providedBuf)) {
    return { ok: false, reason: "mismatch" };
  }
  return { ok: true };
}
```
Three reject reasons, distinct:
"unconfigured"— offer has no secret stored. Treated as a 401 now, was treated as accept-with-warning before."missing"— secret was not provided in the request."mismatch"— provided secret did not match.
Comparison is constant-time via Node's timingSafeEqual, which requires equal-length buffers and throws otherwise; that is why the length check runs first. The early return on a length mismatch can leak the secret's length, but never its contents.
What I learned
- "Legacy mode" for security-critical paths is a hole, not a migration. Either the path is authenticated or it is not. There is no middle.
- Warning-only fraud monitoring is not fraud monitoring. If your security code logs without blocking, it is commentary, not enforcement.
- Test the failure path. Tests that cover the happy path tell you the happy path works. They do not tell you what happens when the assumption underlying the happy path is wrong.
- "Should never happen" needs schema or runtime enforcement, not a hope. The null state was tracked by nothing. The migration could have set a sunset date. It did not.
A security system that logs is a security system that has decided what to do after the fact. That is not the same as deciding.
Audit yourself
If you run any system with shared-token postbacks, webhooks, or callback URLs:
- Grep your handlers for "legacy mode," "compatibility," "transitional," or any phrase that means "auth is off for some subset right now."
- Check your schema for nullable secret columns. If null is permitted, define exactly what the handler does when it is null. "Logs a warning and proceeds" is a hole.
- Search Sentry for warnings that imply your code took a path it should not have. Anything that says "fallback," "skipped," or "accepted without" is worth a read.
I would bet half of you find something. I did.