---
title: "Is Your Website Training AI?"
date: 2023-04-23T14:45:00+02:00
categories:
- webdesign
---
The answer will inevitably be _yes_. Both [Jan-Lukas](https://jlelse.blog/links/2023/04/secret-list-of-websites) and [Ton](https://www.zylstra.org/blog/2023/04/how-many-tokens-from-your-blog-are-in-googles-llm/) wrote about the Washington Post's [Inside the secret list of websites that make AI like ChatGPT sound smart](https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/) today. The Post investigated a massive dataset called the English [Colossal Clean Crawled Corpus](https://www.semanticscholar.org/paper/Documenting-the-English-Colossal-Clean-Crawled-Dodge-Sap/40c3327a6ddb0603b6892344509c7f428ab43d81?itid=lk_inline_enhanced-template) or C4, which drives language model AIs such as Google's T5 and Facebook's LLaMA. OpenAI's ChatGPT presumably uses forty times as much data, but OpenAI doesn't want to disclose where it comes from.

You can search inside C4 to see whether your website was crawled---and thus perhaps unwillingly used as a tool to help improve corporate AI models. _Brain Baking_ apparently contributed 5000 tokens; the CCBot crawler started in 2019. Fire a query via https://c4-search.apps.allenai.org/?q=brainbaking.com to poke around in the dataset yourself.

What's most concerning is the following part of the Post's article, subtitled _A trove of personal blogs_:

> The data set contained more than half a million personal blogs, representing 3.8 percent of categorized tokens. [...] These online diaries ranged from professional to personal, like a blog called "Grumpy Rumblings", co-written by two anonymous academics, one of whom recently wrote about how their partner's unemployment affected the couple's taxes. [...]

But when it comes to data owned by big tech companies, the doors are somehow kept firmly shut:

> Social networks like Facebook and Twitter---the heart of the modern web---prohibit scraping, which means most data sets used to train AI cannot access them. Tech giants like Facebook and Google that are sitting on mammoth troves of conversational data have not been clear about how personal user information may be used to train AI models that are used internally or sold as products.

Why should we as personal bloggers allow this kind of behavior while Facebook and Twitter intentionally do not? Why should we help already super-rich companies generate a model that will inevitably be used to make those companies even richer? As Chris Coyier also [mentioned](https://chriscoyier.net/2023/04/21/the-secret-list-of-websites/), giant internet vacuum suckers like C4 gobble up your data, yet neglect to ask, let alone tell, the authors that their content is being used, and never tell the users of the product they're creating where the data came from.

The least crawlers could do is offer a way to opt out---or even better, opt in. If you take a look at the FAQs of [Common Crawl](https://commoncrawl.org/big-picture/frequently-asked-questions/), the non-profit behind the CCBot that helped generate the C4 dataset, there _is_ a way to opt out, by adding the following entry to your site's `robots.txt`:
```
User-agent: CCBot
Disallow: /
```
I have also blocked the following user agents: `ChatGPT-User`, `Mediapartners-Google`, `AdsBot-Google`, and `adidxbot`.
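Combined with the CCBot entry above, the relevant section of my `robots.txt` presumably ends up looking something like this (a sketch; verify the exact user agent strings against each crawler's own documentation):

```
User-agent: CCBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: Mediapartners-Google
Disallow: /

User-agent: AdsBot-Google
Disallow: /

User-agent: adidxbot
Disallow: /
```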
Of course, it would be very naive of me to think the problem is solved now: first, the damage is already done and there seems to be no way to remove your site from an existing data set (why?); second, who says crawlers will play ball and obey your `robots.txt` file; third, should we be blocking CCBot in the first place? Common Crawl states: "Our goal is to democratize the data so everyone, not just big companies, can do high quality research and analysis." Then again, by providing such a data set for "everyone", it can also easily be abused by "everyone", including big tech. So I don't know. I'm interested to hear the opinion of an expert on this.

It is clear that crawlers keep track of the source URLs; otherwise I wouldn't be able to find my site entries via the C4 search link above. So why not at least provide these source links as citations to your users? That's called common courtesy. In academia, blatantly copying someone else's text without providing accurate references will get you into trouble. Of course, most of big tech's not-so-secret rules revolve around stealing and paywalling your content, so why should language models be any different?

---
And then we haven't even talked about licenses being ignored. The last time that happened, [I gave up GitHub](/post/2022/07/give-up-github), as CoPilot did exactly the same thing. I have the feeling ethics isn't something that most American tech companies deeply care about---and those that (try to) do care are just sacked en masse.

Ever since the launch of this site, I've been an avid follower of Leo Babauta's [uncopyright mindset](https://mnmlist.com/uncopyright-and-a-minimalist-mindset/). Under `/no-copyright-no-tracking`, I wrote:

> I've always detested the _this is mine!_ mindset, especially when it comes to intellectual property. Everyone benefits if everything is open and everyone can build upon each other's work. A possible financial loss is not an excuse. Leo has found copyrights not to be particularly helpful, so he simply got rid of them. He sells thousands of ebooks monthly. You have the right to share them with friends. He would rather have you buy them, but this way his work reaches a broader audience.

In light of the recent "advancement" in the field of commercial AI, I'm afraid I have to change that. I hate resorting to a confusing Creative Commons license, but MIT is specifically geared towards software instead of writing, and the absolute least I want to enforce is **attribution**. So henceforth, _Brain Baking_ is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

Which, of course, changes little. Microsoft happily ignored any `LICENSE` files when gobbling up repositories for GitHub CoPilot, and the web scrapers and bots that OpenAI and the like utilize are sure to do the same. Still, at least my website now states that while you can do whatever you want with what's written here, you _should_ have the courtesy to correctly attribute the source.

I still like and believe in the _Sharing Is Caring_ mantra, but please don't mistake it for _Stealing Is Moneymaking_.