Humans Only

How to block automated attacks (without slowing your product)

Published on 2026-02-19

A practical, risk-based playbook for product owners and developers: defend high-value endpoints, keep UX smooth, and make abuse expensive.

What “automated attacks” actually look like in a product

“Automated attacks” sounds abstract until you see it in your dashboards: a sudden surge of login attempts, a flood of sign-ups that never activate, or an API bill that jumps overnight.

OWASP frames these as automated threats to web applications—abuse of normal functionality at machine speed (credential stuffing, credential cracking, scraping, account creation abuse) rather than one magic exploit (OWASP Automated Threats). That framing helps because it turns “block bots” into a very practical question: which actions do we need to defend, and how do we respond when risk rises?

Start with the endpoints attackers monetise

Bots don’t browse. They transact.

If you’re trying to block automated attacks quickly, focus on:

  1. Login (POST /login): credential stuffing and credential cracking
  2. Signup (POST /signup): fake accounts, trial abuse, referral farms
  3. Password reset: takeover attempts and email/SMS cost spikes
  4. Checkout / claims: card testing, reward abuse
  5. APIs: high-volume scraping and automation that bypasses your UI

A clean way to label what you’re seeing is the OWASP Automated Threat Handbook taxonomy (for example, credential stuffing and scraping each have distinct patterns and countermeasures) (OWASP OAT-008 Credential Stuffing, OWASP OAT-011 Scraping).

The model that works in production: Detect → Decide → Respond

If you want to block automated attacks without breaking your funnel, you need a system you can tune—not a single “bot switch”.

A pattern you’ll see across modern website bot protection products is:

  1. Detect signals (velocity, browser integrity, network reputation, behaviour)
  2. Decide risk (score or category)
  3. Respond with an outcome your product can live with

Score-based approaches like reCAPTCHA v3 made the “Decide” step mainstream by returning a score you enforce server-side (reCAPTCHA v3 docs). Whether you use a score or rules, the core idea is the same: your backend should make the final decision.
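A minimal sketch of that server-side “Decide” step, assuming a reCAPTCHA-v3-style verification payload. The `success`, `score`, and `action` fields follow the reCAPTCHA v3 docs; the `enforce_score` helper, the `expected_action` check, and the 0.5 threshold are illustrative assumptions, not a prescribed API:

```python
# Server-side enforcement of a risk score, in the style of reCAPTCHA v3.
# The payload shape (success/score/action) follows the siteverify response
# documented for reCAPTCHA v3; the threshold is an illustrative default.

def enforce_score(verify_response: dict, expected_action: str,
                  min_score: float = 0.5) -> bool:
    """Return True only if the token verified, was minted for the expected
    action, and the score clears our threshold. The backend decides."""
    if not verify_response.get("success"):
        return False  # token invalid, expired, or already used
    if verify_response.get("action") != expected_action:
        return False  # token was issued for a different action
    return verify_response.get("score", 0.0) >= min_score
```

Whatever produces the score, the pattern is the same: the client only carries a token; acceptance happens on your server.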

Ship one policy your whole team can understand: Allow / Step-up / Block

This is the simplest policy that still holds up under pressure:

  1. Allow (low risk): normal flow, no friction.
  2. Step-up (medium risk): add human verification or an extra check only when signals stack up.
  3. Block / throttle (high risk): stop the attempt, or slow it down hard.

Product owners like this because it’s a clear UX trade-off. Developers like it because it’s observable and debuggable.
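The whole policy can live in one small, observable function. The score bands below are illustrative assumptions to tune against your own step-up and conversion metrics:

```python
# A minimal Allow / Step-up / Block policy. Thresholds are illustrative;
# tune them against step-up rate, pass rate, and conversion.

def decide(risk_score: float) -> str:
    """Map a 0.0 (clean) .. 1.0 (certainly automated) risk score
    to one of the three outcomes."""
    if risk_score < 0.3:
        return "allow"    # low risk: normal flow, no friction
    if risk_score < 0.7:
        return "step_up"  # medium risk: human verification or extra check
    return "block"        # high risk: stop the attempt, or throttle hard
```

Logging (signals, score, outcome) per request gives PMs and developers the same trace to argue over, which is most of what “debuggable” means here.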

Controls that actually block automated attacks (without a rewrite)

Rate limiting is non-negotiable for auth

NIST’s Digital Identity Guidelines explicitly require verifiers to implement rate limiting (throttling) so that failed authentication attempts are effectively limited (NIST SP 800-63B).

In practice:

  1. Rate limit per IP (good first layer, not sufficient alone)
  2. Rate limit per account identifier (email/username) to slow targeted attacks
  3. Rate limit per session/device where you can (better signal than IP alone)

When you do throttle, return HTTP 429 Too Many Requests (standardised in RFC 6585) and include a Retry-After header so legitimate clients can recover cleanly (RFC 6585).

Step-up verification: place it where it pays for itself

Don’t challenge every request. Put step-up verification on actions that can be abused for money, access, or scale.

Good triggers include:

  1. Bursty attempts on high-value endpoints
  2. Repeat failures (e.g. multiple failed logins)
  3. High-risk networks (known datacentre ranges, anonymisers)
  4. Automation fingerprints you can reliably detect
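“Only when signals stack up” can be made concrete with a simple co-occurrence rule. The signal names and the two-signal threshold below are illustrative assumptions, not a fixed recipe:

```python
# Step-up only when independent risk signals co-occur. A single signal
# (one repeat failure, one datacentre IP) stays frictionless on its own.

def should_step_up(signals: dict[str, bool], min_signals: int = 2) -> bool:
    """Trigger human verification when enough independent signals stack."""
    return sum(1 for present in signals.values() if present) >= min_signals
```

This keeps the challenge rate low for real users while still catching the combinations that characterise automation.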

Treat scraping as its own problem

Scraping often looks different from account attacks: lots of GETs, predictable paths, and systematic crawling.

A practical playbook:

  1. Per-route throttles (listing/search endpoints are common hotspots)
  2. Caching for “expensive” pages to reduce the cost of bot traffic
  3. Soft walls for high-value data (authentication, step-up, or stricter limits)
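Per-route throttles are usually just configuration: tighter limits on the listing/search hotspots, a looser default everywhere else. The routes and numbers here are illustrative assumptions:

```python
# Per-route throttle config: stricter limits where scrapers concentrate,
# a generous default elsewhere. Values are (requests, per_seconds).

ROUTE_LIMITS: dict[str, tuple[int, int]] = {
    "/search":   (30, 60),   # search endpoints are a common scraping hotspot
    "/listings": (60, 60),
    "default":   (300, 60),  # everything else
}

def limit_for(path: str) -> tuple[int, int]:
    """Exact-route lookup with a default fallback."""
    return ROUTE_LIMITS.get(path, ROUTE_LIMITS["default"])
```

Pair the tight routes with caching so that even allowed bot traffic mostly hits cheap responses.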

A concrete rollout example: blocking automated attacks on login

Say you’re seeing 10× more login attempts, but successful sign-ins haven’t increased. That’s a classic credential stuffing smell.

Rollout that works for both PMs and devs:

  1. Add monitoring on POST /login (attempts, failures, unique accounts targeted).
  2. Implement rate limiting for failed attempts (per IP + per account identifier).
  3. Introduce Allow / Step-up / Block outcomes.
  4. Step-up when attempts show automation (high velocity + repeated failures + suspicious networks).
  5. Measure weekly: successful logins, support tickets, step-up rate, and confirmed ATOs.

You’ve “blocked automated attacks” when attackers can’t scale, and your real users still get through quickly.
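The “classic credential stuffing smell” from step 1 can be turned into a simple alert: attempts surge while successful sign-ins stay flat. The 10× and 1.2× ratios below are illustrative thresholds, not a standard:

```python
# A simple credential-stuffing smell test over POST /login telemetry:
# attempts surge against a baseline while successes barely move.
# Ratios are illustrative; calibrate against your own traffic.

def looks_like_stuffing(attempts_now: int, attempts_baseline: int,
                        successes_now: int, successes_baseline: int) -> bool:
    attempts_surged = attempts_now >= 10 * max(attempts_baseline, 1)
    successes_flat = successes_now <= 1.2 * max(successes_baseline, 1)
    return attempts_surged and successes_flat
```

Wiring this into the weekly measurements from step 5 gives you an early signal without waiting for confirmed account takeovers.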

What to measure (so you don’t optimise for vanity metrics)

“Requests blocked” is not the goal. You want to see impact without collateral damage.

Track:

  1. Abuse rate: successful bad actions / total attempts
  2. Step-up rate and pass rate
  3. Time-to-complete on protected flows
  4. Conversion at signup/login/checkout
  5. Infrastructure cost (CPU, egress, third-party API spend)

Where Humans Only fits

Humans Only is bot prevention built for product owners and developers who need to block automated attacks without turning core flows into a chore.

It’s fast (typically under 2 seconds), privacy-first (zero tracking), easy to drop in, and includes real-time analytics so you can tune policies with confidence.

Bottom line

To block automated attacks, protect the endpoints attackers monetise, enforce rate limiting like you mean it, and use a simple Allow / Step-up / Block policy you can actually run.

If you want bot prevention that feels smooth for real people and brutally un-fun for automation, Humans Only is built for exactly that: Stop Bots, Welcome Humans.
