Engineering · July 18, 2025 · 5 min read

The Bugs That Kept Coming Back

I was hired to build frontends. Then BugCrowd reports started landing in my inbox. What I learned about security when patching stopped working.

web-security · owasp · bugcrowd · xss · security-architecture

I wasn't hired to do security.

I was a frontend engineer at WalletHub — building pages, writing Angular components, making things look right. Security was someone else's problem.

Then the BugCrowd reports started arriving.

BugCrowd vulnerability reports — XSS, injection, and broken auth alerts flooding in weekly


The Flood

WalletHub ran an active bug bounty program. Researchers probed the platform constantly — testing inputs, fuzzing endpoints, chaining vulnerabilities that no one inside had thought to look for.

The reports were relentless. XSS. Injection. Broken auth. Session issues. Every week, new findings. Every sprint, new patches.

At first, I treated them like bugs. Someone reports it, I fix it, I move on.

But the same types of vulnerabilities kept returning.

Not the exact same bug — the same class of bug. We'd fix an XSS in one form, and a researcher would find another in a different form. We'd sanitize one input, and another unsanitized input would surface three weeks later.

Fixing felt like bailing water from a boat with holes in it.

Patching the same XSS and injection vulnerabilities every sprint — like bailing water from a leaking boat


The Wake-Up Call

Early on, a WordPress vulnerability hit us — one that affected virtually every WordPress installation at the time. It involved Adobe Flash and a SWF file that could be exploited for cross-site scripting.

Every site running WordPress had it. Including ours.

That was the moment security stopped being abstract. This wasn't a researcher poking at edge cases. This was a known exploit, in the wild, affecting millions of sites simultaneously.

We patched it. Everyone patched it. But the lesson stuck: vulnerabilities don't wait for you to be ready.


Why Patches Didn't Stick

After months of BugCrowd reports, I started seeing the pattern.

The bugs weren't random. They were symptoms of the same root cause: the architecture invited abuse.

The stack was PHP-heavy. And PHP — especially the frameworks and patterns common at the time — made it dangerously easy to build insecure systems without realizing it.

Input would flow from the browser through the backend to the database with minimal transformation. Template rendering would output user-supplied content without consistent escaping. Endpoints exposed more than the frontend needed. Trust boundaries were blurry or nonexistent.

Every individual fix was correct. But we were patching symptoms, not treating the disease.

The architecture itself produced vulnerabilities faster than we could close them.


What Actually Worked

I stopped fixing bugs one by one and started asking: how do we make entire classes of bugs impossible?

Validate input, encode output. Every input gets validated the moment it enters the system. Every output gets encoded for its context — HTML, SQL, JSON. Not "somewhere in the middleware." At every boundary.
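A minimal sketch of that principle in TypeScript — validate the moment input enters, encode at the output boundary. The helper names and the username rule are illustrative, not from WalletHub's codebase:

```typescript
// Validate on entry: reject anything that isn't the shape we expect.
// (The allowed pattern here is a hypothetical example.)
function parseUsername(raw: unknown): string {
  if (typeof raw !== "string" || !/^[a-zA-Z0-9_]{3,32}$/.test(raw)) {
    throw new Error("invalid username");
  }
  return raw;
}

// Encode on output, for the HTML context specifically.
// Ampersand must be replaced first so entities aren't double-encoded.
function encodeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const name = parseUsername("alice_01");
const html = `<p>Hello, ${encodeHtml(name)}</p>`;
```

The point isn't these two functions — it's that both steps run at every boundary, so a missed call site is a visible gap rather than a silent default.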

Deny by default. APIs expose nothing unless explicitly allowed. Endpoints return the minimum data needed. Permissions are restrictive first, opened only with justification.
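One way to make that concrete is an allowlist projection, so an endpoint can only return fields that were explicitly approved. A sketch, with hypothetical field names:

```typescript
// A user record as stored server-side (illustrative shape).
type User = { id: number; email: string; passwordHash: string; role: string };

// The only fields this endpoint is allowed to expose.
const PUBLIC_USER_FIELDS = ["id", "role"] as const;

// Copy only allowlisted keys; everything else is denied by default.
function project<T extends object>(obj: T, allowed: readonly (keyof T)[]): Partial<T> {
  const out: Partial<T> = {};
  for (const key of allowed) out[key] = obj[key];
  return out;
}

const user: User = { id: 7, email: "a@b.c", passwordHash: "hash", role: "member" };
const response = project(user, PUBLIC_USER_FIELDS);
// email and passwordHash never leave the server unless someone adds
// them to the allowlist — a reviewable, justified change.
```

Adding a field becomes a deliberate act that shows up in code review, instead of a leak that shows up in a BugCrowd report.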

Content Security Policy. Browser-level headers that block entire categories of attacks. A strict CSP neutralizes most XSS vectors before your code even runs.
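A sketch of what a strict policy looks like — the directive values here are illustrative, not WalletHub's actual policy:

```typescript
// Assemble a strict Content-Security-Policy header value.
const csp = [
  "default-src 'self'",     // nothing loads from third-party origins by default
  "script-src 'self'",      // no inline scripts, no eval — most XSS payloads die here
  "object-src 'none'",      // blocks Flash/SWF embeds outright
  "base-uri 'self'",        // prevents <base> tag hijacking of relative URLs
  "frame-ancestors 'none'", // no framing, so no clickjacking
].join("; ");

// Then set it on every response, e.g. with Node's http module:
// res.setHeader("Content-Security-Policy", csp);
```

Note `object-src 'none'` in particular — a policy like that would have shut down the SWF-based XSS vector from the WordPress incident regardless of whether the file was patched.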

Automated scanning with OWASP ZAP. I wrote custom scan rules inside ZAP targeting our app's specific patterns — the input flows, the rendering paths, the endpoint shapes that kept producing vulnerabilities. Instead of waiting for researchers to find bugs, we found them ourselves.

None of these were revolutionary. They were all in the OWASP playbook from day one. The problem was never knowledge — it was architecture that made the wrong thing easy and the right thing hard.


Web security architecture shift — from trust-by-default to deny-by-default with validation at every boundary

The Frustrating Truth

The hardest part wasn't building the scanner or writing CSP headers.

It was accepting that most security problems aren't bugs — they're design decisions.

When your architecture trusts user input by default, every feature you ship is a potential vulnerability. When your framework makes escaping opt-in instead of opt-out, every template is a risk.

You can't patch your way out of a design problem.

You have to change the design.


What It Taught Me

I carried this lesson out of WalletHub and into every role since. Security became a thread in everything that followed.

I think about abuse cases before I think about features. Before building an input form, I ask: what happens if someone puts a script tag in here? What happens if they replay this request 10,000 times? What happens if they modify the payload?

I design for hostile input. Every system boundary assumes the worst. User input, API responses, webhook payloads — nothing is trusted until validated.
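As a sketch of that stance, here's how a webhook payload might cross a trust boundary — nothing is assumed, and unknown fields don't survive the parse. The event shape and field rules are hypothetical:

```typescript
// The validated shape we allow into the system (illustrative).
interface PaymentEvent { orderId: string; amountCents: number }

// Treat the raw payload as hostile until every field is checked.
function parsePaymentEvent(raw: unknown): PaymentEvent {
  if (typeof raw !== "object" || raw === null) throw new Error("not an object");
  const o = raw as Record<string, unknown>;
  if (typeof o.orderId !== "string" || !/^[A-Z0-9-]{1,40}$/.test(o.orderId)) {
    throw new Error("bad orderId");
  }
  if (typeof o.amountCents !== "number" || !Number.isInteger(o.amountCents) || o.amountCents < 0) {
    throw new Error("bad amount");
  }
  // Rebuild the object from known fields only — extra keys are dropped,
  // so attacker-supplied properties can't ride along downstream.
  return { orderId: o.orderId, amountCents: o.amountCents };
}
```

The rebuild-from-known-fields step is the part that pays off later: even if a downstream consumer naively spreads the object into a database write, there's nothing unexpected left to spread.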

I don't trust abstractions to be safe. Frameworks claim to handle escaping, sanitization, CSRF protection. I verify. Because the one time a framework doesn't handle it is the one time a researcher finds it.

This isn't paranoia. It's pattern recognition.

After watching the same classes of bugs return month after month, you stop believing in fixes. You start believing in systems that make the wrong thing hard.


If You're Patching the Same Bugs

Stop patching. Start asking:

Why does this architecture keep producing this type of bug?

The answer is usually one of three things:

  • Trust boundaries are in the wrong place
  • The default behavior is insecure
  • Security is opt-in instead of opt-out

Fix those, and the BugCrowd reports slow down.

Not because researchers stop looking. Because there's less to find.