In the US, the fate of AI agent product startups depends entirely on Amazon.


Author: Liu Honglin, Attorney at Law, Mankiw Blockchain Legal Services

In December 2025, ByteDance launched a preview version of Doubao Mobile Assistant, where AI could directly operate the phone—users could say a sentence, and it would automatically open platforms such as JD.com, Taobao, and Meituan to compare prices and assist in completing the ordering process. In January 2026, Alibaba's Qianwen App launched the "Order Takeout with One Sentence" function. Users could say "Order me a cup of coffee," and AI would automatically complete the selection of stores, price comparison, ordering, and payment, without requiring users to switch to another app.

Both products aim to transform AI from merely "answering questions" into "doing things for you." Many entrepreneurs see this as the industry's bright future, but a recent ruling by a US court has forced the industry to re-examine a fundamental question:

When AI gains access to a user's account and password, is it exercising the user's rights or engaging in unauthorized intrusion?

The story begins with Amazon.

On March 9, 2026, the U.S. District Court for the Northern District of California ruled that AI company Perplexity was likely liable under the Computer Fraud and Abuse Act (CFAA) and Section 502 of the California Penal Code for using its Comet browser to access users' password-protected Amazon accounts with the users' permission but without authorization from Amazon.

The court issued a temporary injunction prohibiting Perplexity from continuing to use its AI agent to access Amazon's systems and ordered it to destroy all data it had acquired.

The core of this case establishes that user authorization cannot replace platform authorization, and obtaining user credentials does not equate to obtaining platform access permission.

This principle will almost certainly determine the fate of all future AI agent products.

What exactly happened?

In 2025, AI search company Perplexity launched a browser called "Comet," whose headline feature is an AI agent. When users want Comet to help them shop on Amazon, they only need to provide their account credentials, and Comet will log in, browse, compare prices, and even place orders on the user's behalf.

This might sound like just an "AI assistant" for users, but Amazon doesn't see it that way.

Amazon has filed a lawsuit against Perplexity, alleging that its actions constitute unauthorized computer intrusion. Amazon argues that although Comet obtained individual user consent, it never received authorization from Amazon as a platform, and its actions essentially constituted "unauthorized entry" into Amazon's computer systems.

The Computer Fraud and Abuse Act (CFAA) is the primary federal law in the United States regulating computer intrusion. Under this act, a claim of this kind generally requires:

- intentional access to a computer;
- access without authorization or in excess of authorization;
- obtaining information as a result;
- involvement of interstate or foreign communication; and
- loss of at least $5,000 within a one-year period.

The court determined that the evidence provided by Amazon preliminarily met the aforementioned requirements.

Regarding the determination of "unauthorized" access, the court cited precedent set by the Ninth Circuit in Facebook, Inc. v. Power Ventures, Inc. (2016). That case established a key principle: "Consent granted by a user is insufficient to constitute valid authorization after the platform has expressly revoked it."

In other words, user consent and platform authorization are two separate issues and cannot be substituted for each other.

At the state-law level, Section 502(c)(7) of the California Penal Code prohibits knowingly accessing, or facilitating access to, any computer, computer system, or computer network without permission. The court held that, for the same reasons as in its CFAA analysis, Amazon also demonstrated a high probability of success on this claim.

The court established one thing in this case:

User authorization ≠ platform authorization.

This means that even if a user hands you their account password and asks you to operate the account for them, your access is unlawful if the platform says no.

This upends the assumption many builders held: as long as the user agrees, what could be wrong with AI acting on the user's behalf? The court's answer is now unambiguous: no.

With the case behind us, we come to the question entrepreneurs care about most: how do I keep my AI product on the right side of the law?

Attorney Honglin first advises everyone to adhere to these three red lines:

Red Line One: Using users' account passwords to operate e-commerce platforms. This is exactly the trap Perplexity fell into. If you log into platforms like Amazon, JD.com, or Taobao with a user's credentials and then help the user shop, place orders, or leave reviews, and the platform explicitly opposes this behavior, you are operating at the edge of illegality.

Red Line Two: Accessing areas the platform explicitly marks as password protected. The court specifically emphasized this point: Comet accessed Amazon's "password-protected sections." What does this mean? If you are only scraping publicly available web pages, the risk may be relatively low; but the moment you enter areas that require a password, such as member areas, order management, or the personal center, the legal risk rises dramatically.

Red Line Three: Continuing to operate after receiving a platform warning. In this case, Amazon's cease-and-desist letter became key evidence that the access was "unauthorized." This is a reminder to AI companies: continuing to operate after a platform warning will be treated as knowing misconduct and significantly increases the risk of losing the case.

Secondly, based on the logic that user authorization is not equivalent to platform authorization, it is recommended that a compliant AI agent product establish a dual authorization review mechanism:

The first layer is user-level authorization and confirmation. Your product does indeed need explicit user consent; that's the bottom line. But that alone isn't enough.

The second layer is platform-level authorization review. From the earliest stages of product design, the team cannot consider only "what users need"; it must also answer "whether the platform allows it." If you have not obtained explicit authorization from the platform, don't do it, even if the user begs you to.
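The dual-authorization logic above can be sketched as a simple pre-execution gate. This is a minimal illustration, not a legal compliance tool: `PLATFORM_AUTHORIZED` is a hypothetical allow-list standing in for whatever record you keep of platforms that have granted explicit permission (e.g. an API partnership), and `AgentAction` is an invented type.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    platform: str         # e.g. "amazon.com" (hypothetical example)
    requires_login: bool  # does the action touch password-protected areas?

# Hypothetical allow-list of platforms that have granted explicit
# authorization, e.g. via an official API partnership.
PLATFORM_AUTHORIZED = {"taobao.com", "alipay.com"}

def may_execute(action: AgentAction, user_consented: bool) -> bool:
    """Dual-authorization gate: both layers must pass."""
    # Layer 1: explicit user consent is the bottom line.
    if not user_consented:
        return False
    # Layer 2: platform authorization. Holding the user's credentials
    # is never treated as platform permission.
    if action.requires_login and action.platform not in PLATFORM_AUTHORIZED:
        return False
    return True
```

Note that an action on a public page with user consent still passes the gate, mirroring the article's point that scraping public pages carries lower risk than entering password-protected areas.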

Finally, prefer the "official API" route wherever possible. Going forward, some platforms will likely explore a "controlled openness" model, providing limited agent-access capabilities through official API interfaces, which can both meet user needs and preserve platform control.

For example, in the Chinese market, Qianwen has integrated with Taobao and Alipay, following the "integration within the ecosystem" route—by calling official interfaces and tools, it has already obtained "access permission" from the Alibaba system.

Using the official API interface would significantly reduce the legal risks compared to directly simulating user login.
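The structural difference is easy to see in code. In the official-API model, the agent never touches the user's password: it presents a bearer token that the platform itself issued through its own OAuth flow, so the access is authorized at both layers. The endpoint, base URL, and request body below are hypothetical; a real integration would follow the platform's published API documentation.

```python
import json
import urllib.request

def build_order_request(api_base: str, oauth_token: str,
                        item_id: str) -> urllib.request.Request:
    """Build an order request against a platform's (hypothetical) official API.

    The bearer token is issued by the platform through its own OAuth
    flow, so the platform has authorized the access; the agent never
    replays the user's password into a login form.
    """
    return urllib.request.Request(
        f"{api_base}/orders",
        data=json.dumps({"item_id": item_id}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {oauth_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The request would then be sent with urllib.request.urlopen(req).
```

Contrast this with credential replay, where the agent types the user's password into the platform's login page: the platform sees an ordinary user session it never agreed to automate, which is exactly the conduct the Comet case found unlawful.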

After all, only the companies that survive have the right to talk about disruption.

Disclaimer: As a blockchain information platform, the articles published on this site represent only the personal views of the authors and guests and do not reflect the position of Web3Caff. The information contained in the articles is for reference only and does not constitute any investment advice or offer. Please comply with the relevant laws and regulations of your country or region.
