An AI search startup is being sued by one of the world’s biggest retailers. On paper it’s about shopping agents and user accounts. In practice, it goes to the heart of a question publishers can’t avoid: will the law let bots walk through your paywall as if they were people?
Amazon v Perplexity won’t decide every last detail in the AI and copyright saga. But it will say something important about access: who is allowed into a system, on what terms, and whether an AI agent can hide behind a human login and still claim to be innocent.
What Amazon says Perplexity did
Perplexity’s “Comet” is an AI browser. You tell it what you want, it logs into sites on your behalf, clicks around, compares options and buys things for you.
Amazon’s lawsuit says Comet didn’t behave like a browser at all. In Amazon’s telling, the sequence looks like this:
1. The rules were clear. Amazon’s terms of use ban automated tools from accessing private account areas without permission. That covers bots, scrapers and similar software.

2. Warnings were issued. Amazon told Perplexity that Comet’s automated flows inside logged-in accounts were not allowed, and followed up with a cease-and-desist.

3. A technical wall went up. Amazon deployed a fingerprinting system to recognise Comet’s traffic and block it from signing into customer accounts.

4. Comet tried another door. Perplexity shipped an update so Comet no longer matched the fingerprint, and continued logging into accounts while presenting its traffic as if it were a normal Chrome user.
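Real fingerprinting inspects far more than a single header, but the cat-and-mouse dynamic in the sequence above can be sketched with a toy example. Everything here is hypothetical — the marker strings and function are illustrative, not Amazon’s actual system:

```python
# Illustrative sketch only: a toy User-Agent check, nothing like Amazon's
# real fingerprinting. It shows why name-based detection is fragile:
# any client can present whatever string it likes.

KNOWN_AGENT_MARKERS = ["Comet", "PerplexityBot"]  # hypothetical markers

def looks_like_agent(user_agent: str) -> bool:
    """Flag traffic whose User-Agent admits to being an AI agent."""
    ua = user_agent.lower()
    return any(marker.lower() in ua for marker in KNOWN_AGENT_MARKERS)

# An honest agent identifies itself and gets caught by the filter...
print(looks_like_agent("Mozilla/5.0 (compatible; Comet/1.0)"))  # True

# ...while the same software, restyled as stock Chrome, sails through.
print(looks_like_agent(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/126.0.0.0 Safari/537.36"))  # False
```

The asymmetry is the whole story: the defender has to infer identity from signals the client controls, and the client can change any of them with a software update.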
At that point, Amazon says, this stopped being “a clever client”. Once permission had been withdrawn and a technical barrier put in place, any attempt to evade that barrier and carry on inside customer accounts became unauthorised access.
Perplexity’s rebuttal is that it is simply acting for the user. If a human can log in and shop, their chosen software should be able to do the same. The credentials sit on the user’s device, Perplexity says; Amazon is dressing up anti-competitive behaviour as computer misuse.
Strip away the rhetoric and you get a clean question: when a subscriber invites an AI agent into their account, does the platform still have the right to say “no bots”, or does the user’s consent trump that?
Same disease, different symptom
This isn’t happening in a vacuum.
Bad bots now account for a huge share of web traffic — by some industry measures, automated traffic has overtaken human traffic altogether. A large chunk of that traffic comes from big cloud providers, including the same ones that power the platforms complaining about bots. Security firms have spent the last few years tightening defences around data centres that are simultaneously hosting the attacks.
On the other side, Perplexity has been under fire from infrastructure companies, including Cloudflare, for using “stealth” crawlers: rotating IP addresses, pretending to be standard browsers, sometimes ignoring robots.txt, the traditional (if weak) way websites tell automated tools to stay out. Perplexity disputes parts of this picture, but the fact it is even a debate tells you everything about the incentives.
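robots.txt is purely advisory. A short sketch using Python’s standard-library parser shows how the rules read, and why they only bind crawlers that choose to honour them (the crawler names here are illustrative):

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt that blocks one named crawler and allows everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A crawler that announces itself as GPTBot is asked to stay out...
print(rp.can_fetch("GPTBot", "https://example.com/article"))       # False

# ...but the rules are only a request: a "stealth" crawler presenting
# itself as an ordinary browser is, as far as robots.txt knows, welcome.
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))  # True
```

Nothing enforces the `False`. A crawler that simply never fetches robots.txt, or fetches it and ignores it, pays no technical penalty — which is exactly the gap the “stealth crawler” accusations live in.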
It’s also worth noting that users (on Reddit, among other places) report Comet being far more vulnerable to phishing and web-based attacks than traditional browsers like Chrome or Edge.

The culprit is the same agentic AI functionality Amazon complains about: a browser that can autonomously read pages and act on the user’s behalf can also be steered by malicious instructions embedded in webpage content, raising the risk of data exfiltration.
It’s the same problem publishers face every day in their logs: automated systems treating your infrastructure and your content as raw material, and arguing about the etiquette later.
“Scraping for me, not for thee”
The irony (and hypocrisy) is that whilst Big Tech firms and AI labs relentlessly scrape your site, they are also busy building fences around their own content and IP.
Several major labs restrict the use of their models’ outputs to train competing systems, even as their models are built on news articles, books, forums and everything else they could gather during the “wild west” phase of web scraping. Regulators in Europe are now asking whether Google has used publishers’ content to feed AI search features that answer questions directly while sending fewer clicks back.
If you work in media, the pattern feels familiar. Platforms praise openness when they are ingesting your work. When someone wants to ingest theirs, they rediscover the importance of ownership and permission.
No bueno.
How AI agents already slip past your paywall
Paywalls are for people. While lawyers argue about authorisation, AI agents are already doing something publishers assumed was impossible: reading paywalled articles through subscriber accounts and serving the substance to non-subscribers.
Unfortunately, many modern paywalls are purely cosmetic. The full article is delivered in the HTML, and a script adds the blur, dim or modal box that hides it from non-subscribers. A human sees “Subscribe to continue”. An AI browser sees the full text before the paywall code runs, or simply reads the DOM underneath the overlay.
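A minimal, hypothetical sketch of the problem: the “protected” article travels in the markup, and the paywall is just an overlay that client-side script draws on top. The page below is invented for illustration, but anything that reads the raw HTML rather than the rendered page sees the whole story:

```python
from html.parser import HTMLParser

# Hypothetical markup in the style of a cosmetic paywall: the full article
# is in the payload, and a script-driven overlay merely hides it on screen.
PAGE = """
<article id="story">
  <p>Paragraph one of the subscriber-only investigation.</p>
  <p>Paragraph two, also delivered to every visitor.</p>
</article>
<div class="paywall-overlay">Subscribe to continue reading</div>
<script>/* blur #story, show the overlay */</script>
"""

class ArticleText(HTMLParser):
    """Collect text inside <article>, ignoring the overlay and scripts."""
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False

    def handle_data(self, data):
        if self.in_article and data.strip():
            self.chunks.append(data.strip())

parser = ArticleText()
parser.feed(PAGE)

# The full text was there all along; no subscription check ever ran.
print(" ".join(parser.chunks))
```

The defence, by contrast, is a server-side paywall: the article body is simply never sent to an unauthenticated client, so there is nothing for an agent to read underneath the overlay.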
A Columbia Journalism Review report describes investigations in which AI browsers retrieved and summarised long, subscriber-only features even when the site had blocked known AI training bots. The crucial detail is that the AI isn’t coming in as “GPTBot” or “crawler-123”. It is coming in as “this paying user’s browser”.
You know what that means in practice. You spend weeks on a story, it squeaks into profit on subscription revenue, and then you see a chatbot quoting it back to your readers without your name or your brand attached. And, of course, no payment.
The missing revenue hurts. The silent removal of your byline hurts more.
And this is where the legal distinction matters. The law here does not obsess over why someone reads a page. It focuses on whether they went into parts of the system they were clearly told were off-limits and tried to dodge technical barriers designed to keep them out.
Google is also in hot water with the European Commission over similar potential breaches. The problem is widespread, and rulings like these will shape the incentive structures and standards of publishing and journalism for years to come.
Two futures for publishers
Now put yourself, not Amazon, at the centre of the picture. What happens to publishers and the wider industry if this case leans one way or the other?
If Amazon’s view largely prevails:
Courts will have signalled that platforms can withdraw consent for bots, back that with technical blocks, and treat automated tools that disguise themselves to bypass those blocks as intruders, even when they sit behind genuine user logins.
That doesn’t turn every bot into a paying customer overnight. But it does make it easier for publishers to say: “If you identify as an AI agent and agree to our terms, you can come in. If you hide, you are trespassing.”
The practical result is a stronger hand when you want to insist that machine access, not just human access, is governed by rules and, eventually, price.
If Perplexity’s “just the user’s hands” argument wins:
The law will be closer to saying that once a user is allowed into their account, anything their software does inside those walls is presumptively authorised, even if the site has said “no bots” and tried to enforce it.
Your paywall will still decide which people can log in. It will be much harder to argue that an AI working through those people is an unauthorised presence. In effect, one subscriber and a capable agent could provide the substance of your archive to many non-subscribers.
In that world, your main tools shift back towards copyright and contract fights with AI firms over training data and output reuse. Those fights are slow, expensive, and currently dominated by the largest publishers and author groups.
Neither outcome is cleanly “good” for journalism. But they do point to different defaults. One makes it easier to argue that bots need their own relationship with your work. The other risks normalising the idea that a human subscription silently authorises a parallel economy of machine readers.
So what now?
This case will not, on its own, rescue independent media. New revenue has to come from somewhere: licensing, bundles, products we haven’t named yet, and from the AI labs themselves. Paywalls are for people, not bots, and relying on a human-only meter in a machine-saturated web is obviously fragile.
But the outcome of Amazon v Perplexity will help decide whether you can even draw a clean line between the two.
If the law lets publishers treat bots that hide their identity and tunnel through subscriber accounts as trespassers, it becomes more realistic to demand that AI access your work on explicit terms. If it doesn’t, more of your business will be conducted in the shadows, between your readers’ devices and someone else’s model.
You don’t have to love Amazon to agree with its position, or even to see the stakes. However the court ultimately rules, this is not just a spat about shopping assistants. It’s an early test of whether journalism remains something sold to people, or something silently strip-mined by machines.
In the end, lawsuits are not the answer. Writers’ Bloc is building the tools to protect your digital rights, and to monetise your content and IP.
If that’s something you take seriously, let’s chat. Feel free to reply to this email, or to book a brief call to chat specifics. Always here to help.