Customer Statement
Meta Incident 20th January 2026
Issue description and impact
Meta introduced breaking changes to the /media_id endpoint without backward compatibility. As a result, some valid requests were incorrectly classified as bad requests and triggered rate-limiting logic in our internal service (which we call Unify) that is responsible for sending requests to the Meta WhatsApp Business API. In parallel, an unexpected change in Meta's response encoding caused Unify service restarts. This led to temporary service instability and request blocking for affected clients using the /media_id endpoint.
Cause of the issue
Two independent changes on Meta’s side caused the incident.
First, Meta’s API started returning a Bad Request response when the optional phone number ID was included, even though such requests are valid according to Meta’s documentation. This caused Unify’s rate limiter to block clients that sent repeated requests.
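To illustrate the mechanism only (this is not Unify’s actual implementation; the client identifier, counter logic, and threshold below are assumptions), a rate limiter that counts upstream 400 responses per client will eventually block a client that keeps resending a request Meta is wrongly rejecting:

```go
package main

import "fmt"

// badRequestThreshold is an assumed value; Unify's real limits are internal.
const badRequestThreshold = 5

type limiter struct {
	badRequests map[string]int  // client ID -> consecutive Bad Request count
	blocked     map[string]bool // client ID -> currently blocked
}

func newLimiter() *limiter {
	return &limiter{
		badRequests: map[string]int{},
		blocked:     map[string]bool{},
	}
}

// record is called with the upstream status code of each proxied request.
// When Meta answers valid requests with 400, a client that simply retries
// the same valid call crosses the threshold and gets blocked.
func (l *limiter) record(clientID string, status int) {
	if status == 400 {
		l.badRequests[clientID]++
		if l.badRequests[clientID] >= badRequestThreshold {
			l.blocked[clientID] = true
		}
		return
	}
	l.badRequests[clientID] = 0 // successful responses reset the counter
}

func main() {
	l := newLimiter()
	for i := 0; i < badRequestThreshold; i++ {
		l.record("client-42", 400) // valid requests misclassified by Meta as 400
	}
	fmt.Println(l.blocked["client-42"]) // true
}
```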
Second, Meta changed the response delivery method for the /media_id endpoint from a fixed Content-Length payload to Transfer-Encoding: chunked. Unify’s handler did not anticipate this encoding, resulting in unhandled runtime errors and container restarts.
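For context on why the encoding change matters: a handler that sizes its read from the Content-Length header breaks once the upstream switches to Transfer-Encoding: chunked, where no length is advertised up front. The sketch below is a simplified illustration of encoding-agnostic handling, not Unify’s code, and the URL is a placeholder:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetchMedia reads the full response body regardless of how it is framed on
// the wire. With Transfer-Encoding: chunked, resp.ContentLength is -1, so any
// logic that pre-sizes a buffer from that header fails; io.ReadAll does not
// care which framing the server chose.
func fetchMedia(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	// Placeholder URL; the real media lookup goes to Meta's Graph API.
	body, err := fetchMedia("https://graph.facebook.com/MEDIA_ID")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("received", len(body), "bytes")
}
```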
Mitigation steps
An initial hotfix was deployed to stabilize Unify by correctly handling chunked responses for the /media_id endpoint.
A subsequent release removed the phone number ID from proxied requests to Meta as a temporary workaround, preventing valid requests from being rejected and clients from being rate limited. Blocked clients were cleared once the mitigations were in place.
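As a rough sketch of that workaround (the parameter name phone_number_id and the URL below are assumptions for illustration; the exact field Unify strips is internal), the proxy can drop the optional identifier before forwarding the request to Meta:

```go
package main

import (
	"fmt"
	"net/url"
)

// stripPhoneNumberID removes the assumed optional "phone_number_id" query
// parameter so the proxied request matches the shape Meta currently accepts.
func stripPhoneNumberID(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Del("phone_number_id") // assumed parameter name
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	cleaned, err := stripPhoneNumberID("https://graph.facebook.com/MEDIA_ID?phone_number_id=123")
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println(cleaned) // https://graph.facebook.com/MEDIA_ID
}
```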
Additional monitoring was added to detect and alert on similar failures.
Prevention of recurrence & suggestions
Meta has acknowledged the issues on their side:
Between 20 Jan 2026, 4:07 AM and 11:25 AM PST, some customers experienced errors when attempting to retrieve media using the WhatsApp Business API. The error message received was:
"Invalid OAuth access token - Cannot parse access token."
This resulted in failed requests to the Get Media API endpoint, preventing customers from accessing media content as expected.
The impact was limited to the inability to retrieve media files, which may have disrupted business workflows relying on media retrieval.
The incident was triggered by a change in the media retrieval requests. This change inadvertently caused the Get Media API to reject valid OAuth access tokens, resulting in authentication errors for customers. The expected behavior was for the API to accept valid tokens and allow media retrieval.
The issue was not recurring and was directly linked to the recent change. Once the change was reverted, the errors ceased.
To prevent such issues from happening again, additional automated testing and safeguards will be implemented to ensure similar changes do not impact production traffic in the future. Besides that, enhanced monitoring and alerting for authentication errors on the Get Media API will be established to enable faster detection and mitigation.
—
Internally, we are reviewing Unify endpoints to ensure full HTTP RFC compliance.
Timeline
Incident timeline (UTC)
12:20 – First automatic alert triggered for elevated 5xx error rate
12:34 – 360D messaging engineering team began investigating
12:47 – Incident reported on our public status page
13:21 – Messaging engineering team applied a hotfix to prevent Unify service crashes; service stability was restored, but the root cause was still unknown
13:59 – Our L2 team reported that clients were being blocked on the /media_id endpoint
14:00 – Messaging engineering team started investigating rate-limited clients
14:30 – Script prepared to clear clients blocked by the incorrectly triggered rate limits on the /media_id endpoint
14:34 – Messaging engineering team identified unexpected behavior in Meta’s /media_id endpoint when the optional phone number ID was provided
14:58 – Previously blocked clients were cleared again
15:03 – Case escalated to our Technical Account Manager (TAM) at Meta
15:28 – Messaging engineering team identified the full root cause
15:36 – Workaround deployed for the /media_id endpoint and blocked clients were cleared again, preventing further impact
15:46 – Our TAM acknowledged the issue and indicated it was affecting multiple BSPs
16:21 – Incident marked as resolved on our public status page
19:25 – Meta applied a fix and the situation stabilized, allowing us to remove the workaround