Humans Only

Protect your website from bots: a practical plan for product owners and developers

Published on 2026-02-19

A practical, risk-based approach: protect high-value actions, use allow/step-up/block, add rate limits, and measure what actually changes.


Protecting a website from bots: start with the actions that matter

“Protect website from bots” sounds like a site-wide problem. In reality, bots target a handful of high-value actions where money, data, or compute costs pile up: signup, login, password reset, checkout, and key APIs.

OWASP calls these patterns automated threats to web applications—automation abusing normal functionality rather than a single bug (OWASP Automated Threats). That framing is useful because it stops you wasting time defending pages no attacker cares about.

The goal isn’t “no bots”. It’s “no bot abuse”.

Some automation is helpful (search crawlers, uptime monitors, integrations). Even Cloudflare’s definition of bot management focuses on blocking unwanted or malicious bots while allowing useful bots (Cloudflare: bot management).

So your job is not to declare war on every script. Your job is to protect the flows that create value and make abuse expensive, noisy, and measurable.

A model product + dev teams can actually run: Detect → Decide → Respond

The best bot defences aren’t one clever trick. They’re a simple system:

  1. Detect signals (velocity, browser integrity, network reputation, behavioural patterns).
  2. Decide risk (score or category) per request.
  3. Respond with an outcome your product can live with.

Score-based tools made this model popular (for example, reCAPTCHA v3 returns a score and expects your backend to act on it) (Google reCAPTCHA v3 docs). Vendor aside, the architecture is the point.

The three outcomes that keep everyone sane

If you only implement one policy, make it this:

  1. Allow: low-risk traffic continues normally.
  2. Step-up: medium-risk traffic gets extra verification or stricter checks.
  3. Block / throttle: high-risk traffic is denied or slowed down.

This avoids “should we add a CAPTCHA everywhere?” debates, and it’s debuggable in production.
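The allow / step-up / block policy can be sketched as a small decision function. The thresholds below are illustrative, not recommendations; tune them against your own traffic:

```python
# Sketch: map a bot-risk score (0.0 = likely bot, 1.0 = likely human,
# the convention reCAPTCHA v3 uses) to one of the three outcomes.
# ALLOW_THRESHOLD and BLOCK_THRESHOLD are made-up starting points.
ALLOW_THRESHOLD = 0.7
BLOCK_THRESHOLD = 0.3

def decide(score: float) -> str:
    """Return 'allow', 'step_up', or 'block' for one request."""
    if score >= ALLOW_THRESHOLD:
        return "allow"
    if score >= BLOCK_THRESHOLD:
        return "step_up"  # e.g. email code or extra challenge
    return "block"
```

Because the decision lives in one function, it is easy to log the score next to the outcome and debug "why was this user challenged?" in production.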

What to protect first (and what to do)

Bots don’t browse. They hammer endpoints.

Login and password reset (credential stuffing, brute force)

  1. Add rate limiting and lockout logic for repeated failures.
  2. Treat /login and /password-reset as separate surfaces with different thresholds.

NIST’s Digital Identity Guidelines explicitly require verifiers to limit the rate of consecutive failed authentication attempts on an account (NIST SP 800-63B).
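A minimal sketch of that lockout logic, assuming an in-memory store (production code would use something shared like Redis, keyed per account *and* per source IP; the limits here are illustrative):

```python
from collections import defaultdict

MAX_FAILURES = 5        # illustrative threshold
LOCKOUT_SECONDS = 900   # 15-minute window

_failures = defaultdict(list)  # key -> timestamps of recent failures

def record_failure(key: str, now: float) -> None:
    _failures[key].append(now)

def is_locked_out(key: str, now: float) -> bool:
    # Drop failures older than the window, then check the count.
    recent = [t for t in _failures[key] if now - t < LOCKOUT_SECONDS]
    _failures[key] = recent
    return len(recent) >= MAX_FAILURES
```

Keying by account-plus-IP rather than IP alone means a credential-stuffing botnet can’t spread attempts across addresses to dodge the limit on one account.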

Signup (fake accounts, referral farms, trial abuse)

  1. Gate POST /signup with risk-based decisions.
  2. Step-up when signals stack up (data-centre IPs, bursty attempts, automation fingerprints).
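"Signals stacking up" can be as simple as a weighted sum feeding the allow / step-up / block decision. The signal names and weights below are invented for illustration:

```python
# Sketch: combine independent risk signals for POST /signup into one
# score in [0, 1]. Names and weights are hypothetical.
SIGNAL_WEIGHTS = {
    "datacenter_ip": 0.4,
    "burst_attempts": 0.3,
    "automation_fingerprint": 0.5,
}

def signup_risk(signals: set[str]) -> float:
    """Sum the weights of observed signals, capped at 1.0."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))
```

One signal alone stays below a step-up threshold; two or three together push the score high enough to challenge, which is exactly the "step-up when signals combine" behaviour described above.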

Checkout, claims, promos (card testing, reward abuse)

  1. Rate limit by account + device + IP, not just IP.
  2. Add step-up verification on high-risk attempts, not every purchase.
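Limiting by account + device + IP means counting one attempt against all three keys, so rotating IPs alone doesn’t reset the budget. A sketch with fixed-window counters (expiry omitted for brevity; the limit is illustrative):

```python
from collections import Counter

_counts = Counter()

def checkout_allowed(account_id: str, device_id: str, ip: str,
                     limit: int = 10) -> bool:
    """Allow only if the account, the device, AND the IP are all under limit."""
    keys = (f"acct:{account_id}", f"dev:{device_id}", f"ip:{ip}")
    if any(_counts[k] >= limit for k in keys):
        return False
    for k in keys:
        _counts[k] += 1
    return True
```

A card-testing bot that rotates devices and IPs still exhausts the per-account budget, which is the point of the composite key.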

APIs (bypass the UI entirely)

  1. Authenticate properly, scope tokens tightly, and rate limit per token/user.
  2. Consider separate policies for public vs partner APIs.
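Per-token rate limiting is often implemented as a token bucket: each API token gets its own bucket, and partner tokens can simply be issued a bigger one than public tokens. A sketch (capacity and refill rate are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Bucket:
    capacity: float        # burst size
    refill_per_sec: float  # sustained rate
    tokens: float          # current balance
    last: float            # timestamp of last request

def take(bucket: Bucket, now: float) -> bool:
    """Refill for elapsed time, then spend one token if available."""
    elapsed = now - bucket.last
    bucket.tokens = min(bucket.capacity,
                        bucket.tokens + elapsed * bucket.refill_per_sec)
    bucket.last = now
    if bucket.tokens >= 1.0:
        bucket.tokens -= 1.0
        return True
    return False
```

Separate public vs partner policies then become nothing more than different `capacity` / `refill_per_sec` values looked up per token.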

A quick, concrete example: stop fake signups without nuking conversion

Imagine your SaaS free trial is being farmed:

  1. Signups spike.
  2. Activation doesn’t.
  3. Your email costs rise.

A practical rollout:

  1. Put a risk gate in front of POST /signup.
  2. Allow most users.
  3. Step-up when high-risk signals combine.
  4. Block/throttle obvious repeat automation.
  5. Track step-up rate, pass rate, and successful fake signups per day.

Success looks like “fake accounts down, conversion stable”—not “we blocked 10 million requests”.

Rate limiting: the boring hero (use it properly)

Rate limiting is how you turn “infinite attempts” into “finite cost”. When you do throttle, use standard semantics—HTTP 429 Too Many Requests literally means the client sent too many requests in a given timeframe (MDN on 429).

A few practical tips:

  1. Rate limit per endpoint (login ≠ search ≠ checkout).
  2. Escalate penalties (soft throttle → harder throttle → temporary block).
  3. Prefer sliding windows and add jitter to make “synchronised bot bursts” less effective.
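The tips above can be sketched as a per-endpoint sliding-window limiter that answers rejected requests with 429 and a jittered retry hint. Window sizes and limits are illustrative:

```python
import random
from collections import defaultdict, deque

WINDOW = 60.0                       # seconds
LIMITS = {"/login": 5, "/search": 60}  # per endpoint, illustrative

_hits = defaultdict(deque)  # (endpoint, client) -> recent timestamps

def check(endpoint: str, client: str, now: float) -> tuple[int, float]:
    """Return (status, retry_after). 200 = allowed, 429 = throttled."""
    q = _hits[(endpoint, client)]
    while q and now - q[0] >= WINDOW:   # slide the window
        q.popleft()
    limit = LIMITS.get(endpoint, 30)
    if len(q) >= limit:
        # Jitter the advertised wait so synchronised bots don't all
        # retry in the same instant.
        retry_after = (WINDOW - (now - q[0])) + random.uniform(0.0, 5.0)
        return 429, retry_after
    q.append(now)
    return 200, 0.0
```

Note the per-endpoint limits: `/login` gets a far tighter budget than `/search`, matching tip 1 above.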

The mistakes that keep bot problems alive

Treating protection as one global switch

Bots concentrate where your value concentrates. Configure by endpoint and action.

Measuring “blocks” instead of outcomes

Track:

  1. Funnel conversion on protected steps
  2. Step-up rate and pass rate
  3. Time-to-complete
  4. Abuse rate (successful bad actions / total attempts)
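These outcomes can be computed from simple daily event counts. The event names are hypothetical; the point is that every metric is a ratio, not a raw block count:

```python
def outcome_metrics(events: dict[str, int]) -> dict[str, float]:
    """Derive outcome ratios from daily counts (event names illustrative)."""
    attempts = events["attempts"]
    return {
        "step_up_rate": events["stepped_up"] / attempts,
        "pass_rate": events["passed_step_up"] / max(events["stepped_up"], 1),
        "abuse_rate": events["successful_bad_actions"] / attempts,
    }
```

A rising pass rate with a falling abuse rate is the "fake accounts down, conversion stable" outcome described earlier; a falling pass rate warns that real users are being challenged.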

No failure mode

Decide what happens when verification can’t run (timeouts, flaky networks, script blockers). Your system should degrade predictably.
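One way to make that degradation predictable is an explicit fail-open / fail-closed policy per endpoint, decided up front rather than at incident time. The endpoint list here is illustrative:

```python
# Sketch: when the verification call itself times out or errors, fail
# closed (step up) on high-value endpoints and fail open elsewhere.
FAIL_CLOSED_ENDPOINTS = {"/password-reset", "/checkout"}

def on_verification_error(endpoint: str) -> str:
    """Fallback outcome when verification can't run."""
    return "step_up" if endpoint in FAIL_CLOSED_ENDPOINTS else "allow"
```

Writing the policy down as code means an outage in the verification path produces a known, reviewable behaviour instead of an accidental one.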

Where Humans Only fits

Humans Only helps you protect your website from bots with fast, privacy-first verification (typically under 2 seconds), easy drop-in integration, and real-time analytics.

If you want a bot defence that product owners can tune and developers can ship without a six-week detour, Humans Only is built for it: Stop Bots, Welcome Humans.
