The Pentest Finding That Wasn’t in the Report

By Nomad Security


[Cover image: iceberg metaphor for penetration testing. Above the surface: 'what's in the report.' Below the surface: 'what we found,' listing business logic flaws, chained exploits, privilege escalation, internal risks, and attack paths. Headline: not everything critical makes it into the report.]

Three days into a quarterly pentest at a 600-person SaaS company, I had full access to a nightly-refreshed replica of their production database. I never ran a single exploit against their firewall. I ran subfinder, httpx, and a GitHub search.

The finding that mattered most from that engagement was not in the report I delivered. It went into a phone call to the CISO, made over the head of the security director who hired me. This post is about why.

How a pentester actually finds these doors

If your mental model of a network pentest is someone pounding on your edge firewall, update it. The edge is fine. The edge has been fine for a decade. The interesting attack surface is the set of public URLs your engineers spun up this week that nobody has inventoried yet.

Here is the recon workflow, sketched at the level of strategy rather than tutorial. A director of security should recognize the shape of it, not need to copy-paste commands; for anyone who does want the concrete form, a minimal sketch of the first two bullets follows the list.

  • Subdomain enumeration against the apex domain, then resolution and HTTP fingerprinting. The interesting hits are not on your own infrastructure. They resolve to vercel.app, ngrok.io, trycloudflare.com, serveo.net, pinggy.io, loca.lt, or whatever this quarter’s tunneling product happens to be.
  • Certificate transparency log monitoring for the same brand strings. CT logs are public, queryable, and updated in close to real time. ngrok’s own threat-model documentation calls this out as a known discovery vector.
  • GitHub code search across public repos and forks for the company’s apex domain. The hits live in .env.example files that are not actually examples, in YAML configs committed by accident, in personal forks where the original author rotated their secret but the fork still carries the old one.
  • DNS-history lookups for subdomains that briefly pointed at preview infrastructure during a deploy window and then got pulled back, but where the preview environment is still up and still answering.
  • Lurking in the company’s developer-community Slack or Discord for help-channel posts that quote staging URLs verbatim while asking why the OAuth callback is broken.
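
The sketch assumes the ProjectDiscovery tools subfinder and httpx are installed, uses crt.sh as one public interface to the certificate transparency logs, and uses example.com as a stand-in for your apex domain:

    # Bullet 1: enumerate, resolve, fingerprint. The -cname probe (available in recent
    # httpx builds) lets the grep catch hosts pointing at tunnel or preview vendors.
    subfinder -d example.com -silent \
      | httpx -silent -title -cname \
      | grep -Ei 'vercel|ngrok|trycloudflare|serveo|pinggy|loca\.lt'

    # Bullet 2: certificates issued against your brand are public record.
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
      | jq -r '.[].name_value' | sort -u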

Nothing in that list requires a CVE. Nothing in that list trips a WAF. Most of it does not generate a log entry on any device the security team owns.

The exploit chain

The chain that prompted this post, sanitized:

  1. subfinder piped through httpx -title surfaced two subdomains resolving to vercel.app previews. Both had page titles that read like internal admin tools.
  2. A GitHub code search for the apex domain across public forks surfaced a .env.example file in a personal fork of the company’s main repo. The values were not placeholders. They were live-looking strings pointing at the same Vercel preview.
  3. The /admin route on that preview returned a working UI, gated by a DEV_BYPASS_TOKEN header. The token value was hardcoded in the repo with a comment that read, almost verbatim, // TODO: replace before prod, do not commit. (The shape of that request is sketched after this list.)
  4. The preview environment’s database connection string pointed at a production-replica database the platform team had stood up so the preview would have realistic data to test against. The replica refreshed nightly from production.
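
Step 3, reconstructed with sanitized placeholders: the preview hostname and header name below are illustrative, not the customer’s real ones, and the token value is whatever was sitting in the public fork.

    # The entire "exploit": one HTTP request with a header value copied out of a public fork.
    curl -s 'https://acme-admin-preview.vercel.app/admin' \
      -H 'X-Dev-Bypass-Token: <value lifted from the .env.example in the fork>'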

Total elapsed time from subfinder to a query against the production-replica users table: about forty minutes. Three commands of consequence. Nothing in the firewall logs. Nothing in the WAF dashboard. The only artifact on the customer side was a Vercel access log that nobody on the security team had a feed into.

If your detection strategy assumes the attacker comes through your front door, you are not detecting the attackers who come through a tunnel your engineer opened thirty seconds ago.

The counter-intuitive bit

Here is the part I called the CISO about, and the part the rest of this post is really about.

ngrok is not the problem. Vercel preview deploys are not the problem. Cloudflare Tunnel is not the problem. Tailscale Funnel is not the problem. These are excellent tools that solve a real engineering problem, which is: a developer needs to share a work-in-progress URL with a designer, a PM, a customer success rep, or a prospect, for fifteen minutes, right now.

The problem at this customer (and at every customer where I have found this pattern) was that the security team’s deploy-approval process took two weeks per change. ngrok http 3000 takes thirty seconds. The developers were not being malicious. They were not even being careless in the way that word usually means. They were responding rationally to the incentive structure the security organisation had built. Anyone who has watched dev-platform velocity outpace security-review cadence over the last five years has seen this shape; the gap is not closing.

Banning the tool at the network layer fails the day the developer finds the next one. There is always a next one. The list above (serveo, pinggy, loca.lt) is just this quarter’s. Next quarter the list will be different. You will not win this race with a blocklist.

The structural fix is a paved road

The fix is a sanctioned way to share a work-in-progress URL that is faster than ngrok, requires SSO, auto-expires, logs every request, and lives behind your egress controls. If the paved road is faster than the shadow path, developers use the paved road. If it is slower, they do not, regardless of policy.
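
What the paved road looks like varies by stack, but the shape is a one-command wrapper the platform team owns. A hypothetical sketch, every name invented for illustration (the share-preview command, the internal minting endpoint, and the SSO helper are placeholders, not a real product):

    #!/usr/bin/env bash
    # share-preview: hypothetical paved-road wrapper, owned by platform engineering.
    # Mints a short-lived, SSO-gated public URL for a local port. Every request to that
    # URL lands in the security team's log pipeline, and the URL expires on its own.
    set -euo pipefail

    PORT="${1:?usage: share-preview <local-port> [ttl]}"
    TTL="${2:-15m}"   # auto-expire by default; nobody has to remember to tear it down

    # preview.internal.example.com and internal-sso-token stand in for whatever
    # identity broker and tunnel frontend your platform team already runs.
    curl -sf -X POST 'https://preview.internal.example.com/api/share' \
      -H "Authorization: Bearer $(internal-sso-token)" \
      -H 'Content-Type: application/json' \
      -d "{\"local_port\": ${PORT}, \"ttl\": \"${TTL}\"}"

The point is not this particular script. The point is that whatever the wrapper does, it has to be faster to type than ngrok http 3000, or it will not get used.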

That is not a finding the security director who hired me could triage. The director’s available move was to put “dev tunnel exposure” in the formal report, where it would land in next quarter’s planning cycle and get prioritised against fifty other medium-severity findings. The actual fix required a peer-level conversation between the CISO and the VP of Platform Engineering, with the security team in the supporting role rather than the gatekeeping one. The structural change is moving from “security signs every deploy” to “platform owns a paved preview-share path that security audits quarterly.”

The security director did not have the standing to mandate that move. The CISO did.

Why I went around the person who hired me

Pentest engagement etiquette says findings flow back through the person who scoped the engagement. That is a good default. I broke it on this engagement, deliberately, because the timeline mattered.

Had I waited for the report cycle, the structural conversation would have happened in eight weeks, if it happened at all. Because I called the CISO the morning the pattern became clear, the conversation with platform engineering happened by the end of the week. The paved road shipped six weeks later. None of that happens on the timeline of a written finding.

I told the security director I was making the call before I made it. They were not happy about it in the moment, and I would not have done it without their knowledge. But the right move for the customer was the call, not the report entry, and a pentest vendor who will not occasionally make the right move at the cost of the relationship is a vendor optimising for the next renewal rather than for the customer’s actual security.

The reason I am writing this for directors of security specifically: if your pentest vendor ever wants to escalate over your head, the question to ask is not “why are you going around me” but “is the finding genuinely peer-level, and do I want my CISO to hear it from a third party who has no skin in our internal politics, or from me with the third party’s findings as backup?” The second option is almost always better for you.

What we’d do this week

  1. Run subfinder against your own apex domain, pipe it through httpx -title, and grep the output for vercel.app, ngrok.io, trycloudflare.com, serveo.net, pinggy.io, loca.lt, and any other tunneling-vendor strings you can think of. If you have more than fifty engineers, you will find at least one. Do this Monday morning.
  2. Run a GitHub code search for your apex domain across public repos and forks. Look for .env files, YAML configs, and Terraform state. Rotate any credential you find before lunch. Do not wait for the meeting. (Starter queries are sketched after this list.)
  3. Pull the access logs (if any exist) for whichever preview or tunneling product you can confirm is in use. You are looking for external IPs that are not your engineers’ home connections.
  4. Book a thirty-minute conversation with your platform or developer-experience lead and ask exactly one question: what is the sanctioned way for an engineer to share a work-in-progress URL with a designer for fifteen minutes? If the answer is “they file a ticket,” you have just identified the cause of every dev-tunnel finding in your next pentest report.
  5. Take the answer to that conversation to your CISO, framed as a paved-road proposal, not a policy violation. The pitch is faster, more observable, more compliant. Not slower-but-safer.
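
Item 1 is the pipeline sketched earlier in this post. For item 2, one way to make a first pass is the gh CLI, assuming it is installed and authenticated; code search’s coverage of forks is limited, so follow up in the web UI as well. As before, example.com stands in for your apex domain:

    # Public code mentioning your apex domain in the kinds of files that tend to carry secrets.
    gh search code '"example.com" filename:.env' --limit 50
    gh search code '"example.com" extension:yml' --limit 50
    gh search code '"example.com" extension:tfstate' --limit 50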

If you have read this far and you are wondering whether your last pentest would have caught this, the honest answer is: probably not, unless the scope explicitly included open-source recon and the tester knew where to look. Most network pentests do not. They test the perimeter you asked them to test. The doors your engineers opened this morning are somebody else’s scope, until they are not.

This is general guidance, not legal or audit advice. Talk to your QSA or auditor for your situation.

Nomad Security

From the editors

Need help applying this to your environment?

Nomad Security helps engineering and security teams find and fix the issues attackers actually exploit. Penetration testing, vCISO advisory, secure code review, and threat research, sized to mid-market budgets.