Why TLS scanning isn't enough: the cryptographic surfaces most tools miss
Samuel Tseitkin 06 April 2026
A security team recently shared something that should give every CISO pause. They had done the work. They'd deployed X25519MLKEM768, the post-quantum hybrid key exchange group recommended by both NIST and IETF, across their public-facing infrastructure. Their TLS scanner confirmed it. In the language of most PQC readiness assessments, they were ahead of the curve.
Then they ran a full cryptographic discovery scan. Their certificates were still RSA-2048. Their email signing keys were classical ECDSA. Their SSH host keys were RSA. Their JWT fleet was signing with RS256. Their cloud KMS keys hadn't been rotated to any post-quantum algorithm. ML-DSA, the signature standard that complements ML-KEM, was nowhere in the environment. The TLS handshake was quantum-safe. Everything behind it was not. This isn't a story about negligence. It's a story about what "scanning" typically measures, and what it doesn't.
What TLS key exchange actually covers
To be clear: deploying X25519MLKEM768 is genuinely good practice. The hybrid construction means that even if a future quantum computer can break the classical component, the ML-KEM layer keeps the session secret protected. For defending against harvest-now-decrypt-later, ensuring that traffic recorded today can't be decrypted when quantum computers arrive, this is exactly the right move.
But TLS key exchange protects one thing: the confidentiality of data in transit during a session. It says nothing about the identity guarantees backing that session, the credentials used to authenticate, the keys stored in your infrastructure, or the algorithms your applications use to sign and verify everything outside of that handshake.
Certificates: identity, not just transport
The certificate presented during a TLS handshake is what proves to the client that it's talking to the right server. If that certificate is signed with RSA or ECDSA, its identity guarantee is quantum-vulnerable, regardless of whether the key exchange itself used ML-KEM.
The threat model here is different from Harvest Now, Decrypt Later. Recorded traffic threatens confidentiality retroactively, but signatures can only be forged once a capable quantum computer exists; at that point, an adversary doesn't need to break the key exchange at all - they can undermine the identity layer directly by forging certificates signed with quantum-vulnerable algorithms. A complete cryptographic inventory covers every certificate across every service, including HTTPS, SMTPS, LDAPS, and internal APIs, with algorithm type, key size, and expiry all assessed against post-quantum readiness criteria.
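As a sketch of what that assessment looks like, here's a minimal classifier over a hypothetical certificate inventory. The algorithm names and record format are illustrative, not the output of any particular scanner:

```python
# Hypothetical inventory records; a real scanner would populate these
# from observed handshakes and parsed certificates.
PQ_READY_PREFIXES = ("ml-dsa", "slh-dsa")
QUANTUM_VULNERABLE = ("rsa", "ecdsa", "dsa", "ed25519")

def classify(sig_alg: str) -> str:
    """Bucket a certificate signature algorithm by quantum exposure."""
    alg = sig_alg.lower()
    if alg.startswith(PQ_READY_PREFIXES):  # check PQ first: "ml-dsa" contains "dsa"
        return "pq-ready"
    if any(v in alg for v in QUANTUM_VULNERABLE):
        return "quantum-vulnerable"
    return "unknown"

inventory = [
    {"service": "www.example.com:443", "sig_alg": "RSA-2048-SHA256"},
    {"service": "ldap.internal:636", "sig_alg": "ECDSA-P256-SHA256"},
    {"service": "api.internal:8443", "sig_alg": "ML-DSA-65"},
]

for entry in inventory:
    print(f'{entry["service"]}: {classify(entry["sig_alg"])}')
```

Note that Ed25519 lands in the vulnerable bucket too: elliptic-curve signatures fall to Shor's algorithm just as RSA does.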
Email: the forgotten signing surface
Email infrastructure carries its own cryptographic footprint that almost no TLS scanner touches. DKIM signing keys authenticate outbound mail to receiving servers. S/MIME certificates sign the content of messages. STARTTLS governs whether the transport is encrypted at all, and with what.
These surfaces are rarely audited at the cryptographic level, and they're rarely part of a PQC transition plan. A domain whose outbound mail is signed with RSA-1024 DKIM keys, which is common among organisations that set up email signing years ago and haven't revisited it, has a meaningful quantum exposure that no TLS migration will remediate.
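A rough triage of a DKIM record can be done from the TXT record alone, since the public key is published in DNS. The sketch below uses a size heuristic on the base64 key blob rather than proper DER parsing, and the sample record is fabricated:

```python
import base64

def parse_dkim(txt: str) -> dict:
    """Split a DKIM TXT record into its tag=value pairs."""
    return dict(
        part.strip().split("=", 1)
        for part in txt.split(";")
        if "=" in part
    )

def assess(txt: str) -> str:
    tags = parse_dkim(txt)
    if tags.get("k", "rsa") != "rsa":
        return f"non-RSA key type: {tags['k']}"
    der = base64.b64decode(tags["p"])
    # Rough heuristic: a 1024-bit RSA SubjectPublicKeyInfo is about
    # 162 DER bytes, a 2048-bit one about 294. Real tooling should
    # parse the DER and read the modulus length exactly.
    if len(der) < 200:
        return "likely RSA-1024: rotate"
    return "RSA-2048 or larger: still quantum-vulnerable"

# Fabricated record: 162 zero bytes standing in for a real key blob.
record = "v=DKIM1; k=rsa; p=" + base64.b64encode(b"\x00" * 162).decode()
print(assess(record))
```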
SSH host keys: the administrative attack surface
Every SSH server in your estate advertises a set of host keys that clients use to verify its identity. If those keys are RSA - which they are in most environments built before the post-quantum transition - then the administrative access path to your entire server infrastructure is backed by a quantum-vulnerable identity guarantee.
SSH is particularly easy to overlook because it's not customer-facing. But it's also where privileged administrative access lives. A quantum-capable adversary who can undermine SSH host key verification has a path to every server in scope.
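Host key types are advertised in the clear, so this surface can be inventoried without credentials. A minimal sketch, parsing ssh-keyscan-style output (the sample lines are fabricated and the base64 blobs truncated):

```python
# All classical host key types, Ed25519 included, rely on problems
# a quantum computer solves; this list is illustrative, not exhaustive.
VULNERABLE = {"ssh-rsa", "ssh-dss", "ecdsa-sha2-nistp256",
              "ecdsa-sha2-nistp384", "ecdsa-sha2-nistp521",
              "ssh-ed25519"}

def host_key_types(scan_output: str) -> dict:
    """Map each host to its advertised (key_type, vulnerable) pairs."""
    findings = {}
    for line in scan_output.splitlines():
        if not line or line.startswith("#"):  # skip comment lines
            continue
        host, key_type, *_ = line.split()
        findings.setdefault(host, []).append(
            (key_type, key_type in VULNERABLE))
    return findings

sample = """\
# bastion.internal:22 SSH-2.0-OpenSSH_9.6
bastion.internal ssh-ed25519 AAAAC3Nza...
bastion.internal ssh-rsa AAAAB3Nza...
"""
for host, keys in host_key_types(sample).items():
    for key_type, vulnerable in keys:
        print(host, key_type,
              "quantum-vulnerable" if vulnerable else "ok")
```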
JWT fleets: signing at scale
JWTs are the authentication mechanism for most modern APIs. The signing algorithm declared in each token's header (RS256, ES256, HS256, and so on) is a cryptographic choice made when the token is issued and propagated to every relying party that validates it. Most JWT auditing is done token by token, in point-in-time spot checks.
A fleet-level view is different. It shows algorithm distribution across all active tokens, identifies outliers using deprecated signing methods, and gives a population-wide compliance picture rather than a sample. For organisations issuing tokens at scale, the gap between "we updated our default signing algorithm" and "every active token in our fleet uses the new algorithm" can be surprisingly large.
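The fleet-level view falls out of a property JWTs already have: the header is readable without verifying the signature. A minimal sketch (the demo tokens are fabricated and unsigned):

```python
import base64
import json
from collections import Counter

def jwt_alg(token: str) -> str:
    """Read the alg field from a JWT header without verifying the token."""
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(header_b64))["alg"]

def fleet_distribution(tokens) -> Counter:
    """Aggregate signing algorithms across a population of tokens."""
    return Counter(jwt_alg(t) for t in tokens)

def make_token(alg: str) -> str:
    # Demo-only token: real tokens carry a meaningful payload and signature.
    header = json.dumps({"alg": alg, "typ": "JWT"}).encode()
    h = base64.urlsafe_b64encode(header).rstrip(b"=").decode()
    return f"{h}.e30.sig"

fleet = [make_token("RS256")] * 3 + [make_token("ES256")]
print(fleet_distribution(fleet))
```

Reading headers without verification is safe here precisely because nothing is being trusted: the scan only counts what each token claims about itself.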
Cloud KMS: key metadata without key material
Cloud key management services, such as AWS KMS, Azure Key Vault, and GCP Cloud KMS, hold the keys that protect data at rest across cloud infrastructure. The algorithm those keys use is a property of their configuration, not their material, and it's readable without accessing any secrets. Some services now offer post-quantum options, including ML-DSA, but most keys in use are still classical.
Most organisations don't have a systematic view of the algorithm posture of their cloud keys. KMS interfaces surface key IDs, rotation status, and policies; they don't present an aggregated view of how many keys use RSA versus EC versus HMAC, or which are already using post-quantum algorithms. A cryptographic inventory that covers cloud KMS surfaces this without touching any key material.
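Building that aggregated view is mostly a matter of bucketing key metadata. The sketch below assumes records shaped loosely on AWS KMS key metadata (the KeySpec field); the records themselves are fabricated:

```python
from collections import Counter

def algorithm_posture(keys) -> Counter:
    """Bucket cloud keys by algorithm family from metadata alone."""
    buckets = Counter()
    for key in keys:
        spec = key["KeySpec"]  # field name modeled on AWS KMS; an assumption
        if spec.startswith("RSA"):
            buckets["RSA"] += 1
        elif spec.startswith("ECC"):
            buckets["EC"] += 1
        elif spec.startswith("HMAC"):
            buckets["HMAC"] += 1
        elif "ML_DSA" in spec:
            buckets["post-quantum"] += 1
        else:
            buckets["other"] += 1
    return buckets

keys = [
    {"KeyId": "a1", "KeySpec": "RSA_2048"},
    {"KeyId": "b2", "KeySpec": "ECC_NIST_P256"},
    {"KeyId": "c3", "KeySpec": "HMAC_256"},
    {"KeyId": "d4", "KeySpec": "ML_DSA_65"},
]
print(dict(algorithm_posture(keys)))
```

No key material is touched at any point; everything here comes from the same metadata the KMS console already exposes per key.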
Source code: where classical algorithms are committed
Cryptographic algorithms show up in codebases as library imports, configuration constants, and sometimes as hardcoded values. A source code scan surfaces where classical algorithms are used across the codebase - not as a live assessment of what's deployed, but as an inventory of what engineering teams will need to migrate.
This matters for planning. Knowing that a particular algorithm appears in 47 files across 6 repositories is a different kind of finding than knowing a TLS endpoint is using it. It's the difference between a runtime observation and a development lifecycle task.
Devices and OT: the surface with the least tooling
Operational technology, IoT devices, and air-gapped systems typically sit outside the perimeter that most discovery tools cover. They may be running cryptographic libraries that haven't been updated in years, using algorithms with no upgrade path, or operating in environments where the deployment model for a standard SaaS scanner simply doesn't apply.
On-premise deployment capability matters here. A scanner that can be deployed within the environment, without requiring data to leave the customer boundary, is the only way to get visibility into this surface without introducing new exposure.
Coverage, not a single data point
The team who deployed X25519MLKEM768 made a good decision. The problem wasn't what they did; it was the assumption that it was sufficient. PQC readiness isn't a single configuration change; it's a property of an entire cryptographic estate, assessed across every surface where algorithms make identity and confidentiality guarantees.
A TLS scan tells you about one layer of one surface. A complete cryptographic inventory tells you where you actually stand. Tools like CipherScout are built to cover this full surface area - TLS, certificates, email, SSH, JWT fleets, cloud KMS, source code, OT and IoT devices, and more - and produce a CycloneDX 1.7 CBOM that gives both security teams and compliance functions an auditable record of what was found. The goal isn't to alarm; it's to give organisations an accurate picture so they can prioritise the transition work that actually matters.
If you've deployed post-quantum key exchange and want to understand what's still exposed underneath it, that's exactly what a discovery scan is for. Book a CipherScout discovery scan →