In May, Reddit announced it would allow OpenAI to train its models on Reddit content for a price. Now, according to The Verge, Reddit will block most automated bots from accessing, learning from, and profiting from its data without a similar licensing agreement.
Reddit plans to do this by updating its robots.txt file, the "basic social contract of the web" that determines how web crawlers can access the site. Most nascent AI companies (including, at one point, OpenAI) train their models on content they've scraped from across the web without considering copyright or the Terms of Service of individual sites.
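To illustrate the mechanism at play, here is a minimal sketch of how a well-behaved crawler consults robots.txt, using Python's standard-library `urllib.robotparser`. The rules below are purely illustrative (allowing a search crawler while blocking everyone else), not Reddit's actual file, and the bot names are examples:

```python
from urllib import robotparser

# Illustrative robots.txt rules: block all crawlers by default,
# but allow a named search-engine bot. Not Reddit's real policy file.
rules = [
    "User-agent: *",
    "Disallow: /",
    "",
    "User-agent: Googlebot",
    "Allow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A generic AI scraper falls under the wildcard rule and is blocked;
# the explicitly allowed crawler is not.
print(rp.can_fetch("ExampleAIBot", "https://www.reddit.com/r/all"))  # False
print(rp.can_fetch("Googlebot", "https://www.reddit.com/r/all"))     # True
```

As Reddit's own legal officer notes below, nothing technically forces a crawler to obey these rules; robots.txt is a convention, which is why Reddit is pairing the update with a public statement of intent to enforce its content policy.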
Per The Verge's Alex Heath, search engines like Google got away with this form of scraping thanks to the "give-and-take" of Google sending traffic back to individual sites in exchange for the ability to crawl them for information. Now, AI companies are tipping the balance by taking that same information and providing it to users without sending them back to the sites the information came from.
Reddit's chief legal officer, Ben Lee, told The Verge that the parameters of robots.txt are not legally enforceable but that publicizing Reddit's intention to enforce its content policy is "a signal to those who don’t have an agreement with us that they shouldn’t be accessing Reddit data."
In a blog post about the change, Reddit noted that "good faith actors – like researchers and organizations... will continue to have access to Reddit content for non-commercial use." These include the Internet Archive, home to the Wayback Machine.