NYT Sends Perplexity Cease-and-Desist Letter Over AI

The newspaper says the company uses its writing without permission, according to a Wall Street Journal report, even as other media companies forge lucrative deals with AI firms to license copyrighted material.

Shane Snider, Senior Writer, InformationWeek

October 15, 2024

3 Min Read
[Image: Reading the online edition of the New York Times. Credit: Ian Dagnall via Alamy Stock]

The New York Times has sent a cease-and-desist letter to Jeff Bezos-backed Perplexity, demanding that the AI startup stop using its content, according to a report from the Wall Street Journal.

This is not the first time the iconic publication has taken on an AI firm. Last year, it sued ChatGPT parent OpenAI over the firm’s alleged use of millions of its articles in model training. While OpenAI maintains that such training constitutes legal fair use of published content, the company has since set out to forge several large media licensing deals.

Perplexity offers a different service than the popular ChatGPT chatbot: it uses AI to power an “answer engine” that competes more directly with traditional search engines.

Perplexity AI spokesperson Sara Platnick tells InformationWeek in an email that the company plans to respond to the notice by the Oct. 30 deadline. The company contends that it is not gathering text to train models. “We aren’t scraping data for building foundation models, but rather indexing web pages and surfacing factual content as citations to inform responses when a user asks a question,” she says. “The law recognizes that no one organization owns the copyright over facts.”

NYT: Perplexity Subverted Scraping Protections


The Times says Perplexity found a workaround for its anti-scraping and anti-bot measures. The paper alleges “unlawful use” of its articles and demands to know how and why Perplexity still cites the publication.
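For context, the standard mechanism behind such anti-scraping signals is the robots.txt file, which publishers use to tell crawlers which paths are off-limits. The sketch below, using Python’s standard-library `urllib.robotparser` with a hypothetical ruleset (not nytimes.com’s actual file), shows how a compliant crawler is expected to check before fetching:

```python
from urllib import robotparser

# Hypothetical robots.txt rules for illustration only; real publishers
# publish their own directives at https://<site>/robots.txt.
rules = """
User-agent: ExampleBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler consults the parsed rules before requesting a page.
# Note that honoring robots.txt is a convention, not a technical barrier,
# which is why publishers also deploy separate anti-bot measures.
blocked = rp.can_fetch("ExampleBot", "https://example.com/2024/10/story.html")
print(blocked)  # False: this agent is disallowed everywhere
```

Because compliance is voluntary, disputes like this one turn on whether a crawler respected such directives, and on what the law requires when it did not.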

Perplexity CEO Aravind Srinivas told the Wall Street Journal that the startup wants to collaborate with media companies like NYT. “We are very much interested in working with every single publisher, including The New York Times,” he told WSJ. “We have no interest in being anyone’s antagonist here.”

But NYT is not the only publication up in arms over Perplexity’s business practices. Condé Nast, owner of Wired, The New Yorker, Vogue, and other iconic publications, also sent Perplexity a cease-and-desist letter. Earlier this year, Forbes accused Perplexity of creating “knockoff stories” with “eerily similar wording” and “entirely lifted fragments” of its articles.

Perplexity argues that its technology is a net positive for journalism, citing its revenue-sharing program. Platnick says Perplexity fosters “a rich and open information ecosystem … it gives news organizations the ability to report on topics that were previously covered by another news outlet.”

She adds, “We deeply value journalism…”


Dozens of lawsuits have been filed by artists, content creators, and rights holders over AI’s use of copyrighted material and the potential for misuse. In August, US District Judge William Orrick allowed artists’ claims that Stability AI, Midjourney, DeviantArt, and Runway AI violated their rights by illegally storing their works to move forward, though he dismissed some of the artists’ other claims.

A flurry of lawsuits by authors, publishers, and artist advocates was filed in 2023 alone. Those cases are ongoing.

Daniel Colson, co-founder and executive director of the AI Policy Institute, says NYT’s complaint against Perplexity illustrates a need for stronger AI regulations. “Perplexity’s approach is a prime example of the ‘move fast and break things’ mentality that is all too common among Silicon Valley startups, prioritizing rapid innovation over potential legal obstacles,” Colson tells InformationWeek in an email interview. “This aggressive startup culture is often viewed as virtuous by founders and investors… While effective for low-stakes technological developments, this mindset is poorly suited for the creation of powerful and potentially dangerous AI systems.”

He adds, “There is an urgent need for appropriate guardrails in AI development to balance innovation while mitigating risks…”


InformationWeek has reached out to The New York Times for comment.

About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
