Humans Only

Protect against bots: a practical plan for product owners and developers

Published on 2026-02-19

Protect high-value actions with risk-based decisions, smart rate limiting, and measurable outcomes—without derailing your UX.

Protect against bots by protecting actions (not pages)

“Protect against bots” often gets treated like a perimeter problem: shield the whole site and call it done. In practice, bots don’t care about your homepage copy. They care about high-value actions: signup, login, password reset, checkout, and your APIs.

OWASP frames this neatly as automated threats to web applications—automation abusing normal functionality rather than a single exploitable bug (OWASP Automated Threats). That’s good news: you can focus your effort where it actually pays off.

The goal isn’t “no bots”. It’s “no bot abuse”.

Some bots are useful: search crawlers, uptime monitoring, partner integrations. The win is blocking malicious automation while letting legitimate automation and real people through.

So don’t aim for a dramatic “we stopped 100% of bots” slide. Aim for measurable outcomes: fewer fake accounts, fewer takeovers, fewer scraped pages, fewer costly API calls.

A model both product and dev can run: Detect → Decide → Respond

Most successful bot defences boil down to a simple system:

  1. Detect signals (velocity, browser integrity, network reputation, behaviour patterns).
  2. Decide risk per request (score or category).
  3. Respond in a way your product can live with.

This is the same pattern you see in risk-based approaches across the industry (for example, scoring models that require a server-side decision). The point isn’t the vendor—it’s the architecture.
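The Detect → Decide → Respond loop can be sketched in a few lines. Everything here is an illustrative assumption — the signal names, weights, and thresholds are placeholders you would tune against your own traffic, and real detectors would replace the toy checks:

```python
def detect(request: dict) -> dict:
    """Collect simple signals from a request (stand-ins for real detectors)."""
    return {
        # Placeholder check; a real system would use an IP-reputation feed.
        "is_datacenter_ip": request.get("ip", "").startswith("10."),
        "requests_last_minute": request.get("velocity", 0),
        "has_browser_integrity": request.get("integrity", False),
    }

def decide(signals: dict) -> int:
    """Turn signals into one risk score (0 = clean, higher = riskier)."""
    score = 0
    if signals["is_datacenter_ip"]:
        score += 40
    if signals["requests_last_minute"] > 30:
        score += 40
    if not signals["has_browser_integrity"]:
        score += 20
    return score

def respond(score: int) -> str:
    """Map the score onto a product decision the UX can live with."""
    if score >= 80:
        return "block"
    if score >= 40:
        return "step_up"
    return "allow"
```

The useful property is that each stage is independently testable: product can argue about `respond` thresholds without developers touching `detect`.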

The three outcomes that stop endless debates

If your team only agrees on one policy, make it this:

  1. Allow: low-risk traffic gets the normal UX.
  2. Step-up: medium-risk traffic gets extra verification or stricter checks.
  3. Block / throttle: high-risk traffic is denied or slowed down.

This gives product owners control over friction, and gives developers something deterministic to implement and debug.
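As a sketch of that deterministic mapping (the threshold values are assumptions, not recommendations — start conservative and tune against your baseline):

```python
from enum import Enum

class Outcome(Enum):
    ALLOW = "allow"      # normal UX
    STEP_UP = "step_up"  # extra verification or stricter checks
    BLOCK = "block"      # deny or slow down

# Illustrative thresholds; tune them against your own baseline traffic.
STEP_UP_THRESHOLD = 40
BLOCK_THRESHOLD = 80

def outcome_for(risk_score: int) -> Outcome:
    """Deterministic mapping from a risk score to one of three outcomes."""
    if risk_score >= BLOCK_THRESHOLD:
        return Outcome.BLOCK
    if risk_score >= STEP_UP_THRESHOLD:
        return Outcome.STEP_UP
    return Outcome.ALLOW
```

Because the mapping is pure and deterministic, a given score always produces the same outcome — which is exactly what makes it debuggable.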

What to protect first (with concrete examples)

Bots concentrate where value concentrates. Start with one or two endpoints, ship, measure, then expand.

Login + password reset

Credential stuffing and brute-force attempts love predictable auth endpoints. NIST’s Digital Identity Guidelines are blunt here: the verifier shall implement rate limiting to effectively limit failed authentication attempts (NIST SP 800-63B).

Practical starting point:

  1. Rate limit failed logins per account + IP + device (not just IP).
  2. Step-up on suspicious patterns (bursts, unusual geos, automation fingerprints).
  3. Apply different policies to /login vs /password-reset (they are not the same risk).

Signup

Fake accounts drive referral abuse, free-trial farming, and downstream support mess.

Practical starting point:

  1. Put a risk gate in front of POST /signup.
  2. Allow most attempts.
  3. Step-up when signals stack (data-centre IP + high velocity + repeated patterns).
  4. Block/throttle repeated obvious automation.
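The "signals stack" idea can be sketched as a simple count: any single signal still allows, two together step up, all three block. The specific signals and cut-offs here are assumptions for illustration:

```python
def signup_decision(is_datacenter_ip: bool,
                    signups_from_ip_last_hour: int,
                    email_pattern_repeats: bool) -> str:
    """Decide on a POST /signup attempt by counting stacked risk signals."""
    stacked = sum([
        is_datacenter_ip,
        signups_from_ip_last_hour > 10,  # assumption: high-velocity cut-off
        email_pattern_repeats,           # e.g. user+001@, user+002@, ...
    ])
    if stacked >= 3:
        return "block"
    if stacked == 2:
        return "step_up"
    return "allow"
```

One lonely signal (a traveller on a VPN, say) never blocks on its own — that is what keeps the false-positive rate survivable.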

APIs

APIs are how bots skip your UI entirely.

Practical starting point:

  1. Authenticate properly and scope tokens tightly.
  2. Rate limit per token/user.
  3. Separate policies for public APIs vs partner APIs.

Rate limiting: the boring hero (do it properly)

Rate limiting turns “infinite attempts” into “finite cost”. When you throttle, use clear semantics: HTTP 429 Too Many Requests literally means the client sent too many requests in a given timeframe (MDN on 429).

A few rules of thumb that work in production:

  1. Rate limit per endpoint (signup ≠ search ≠ checkout).
  2. Escalate gradually (soft throttle → harder throttle → temporary block).
  3. Add jitter so synchronised bot bursts don’t stay synchronised.
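The three rules above can be combined in one small function: escalate by tier, answer hard throttles with 429, and add jitter so synchronised bursts drift apart. The tier boundaries and delays are illustrative assumptions:

```python
import random

def throttle_response(recent_requests: int) -> tuple[int, float]:
    """Return (http_status, delay_seconds) for a client's recent volume."""
    jitter = random.uniform(0.0, 0.5)    # desynchronise coordinated bursts
    if recent_requests > 100:
        return 429, 60.0 + jitter        # temporary block
    if recent_requests > 50:
        return 429, 5.0 + jitter         # harder throttle
    if recent_requests > 20:
        return 200, 1.0 + jitter         # soft throttle: serve, but slowly
    return 200, 0.0                      # normal service
```

When returning 429, also send a `Retry-After` header with the delay, so well-behaved clients back off cleanly.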

A quick rollout plan (that won’t eat your quarter)

  1. Pick one high-value endpoint (often /login or /signup).
  2. Run monitor mode briefly to establish a baseline.
  3. Ship Allow / Step-up / Block with conservative thresholds.
  4. Add or tune rate limiting for auth and high-velocity actions.
  5. Review weekly using real metrics: conversion, step-up rate, pass rate, abuse rate.

This approach keeps scope sane and makes progress visible.

Where Humans Only fits

Humans Only helps you protect against bots with fast, privacy-first verification (typically under 2 seconds), zero tracking, easy drop-in integration, and real-time analytics.

If you want to stop automated abuse without turning your UX into a security side quest, Humans Only is built for it: Stop Bots, Welcome Humans.
