Common Crawl and search engines

Common Crawl is the closest thing we have to an open index, though it doesn't meet your requirement of ignoring robots.txt for corporate websites while obeying it for personal sites. Unfortunately, being open and publicly available means that people use it to train LLMs. Google did this for initial versions of Bard, so a lot of sites now block Common Crawl's crawler (CCBot). Most robots.txt guides for blocking GenAI crawlers include an entry for it now.
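
For reference, Common Crawl's crawler identifies itself with the `CCBot` user-agent. A minimal robots.txt entry to block it looks like this (shown alongside OpenAI's `GPTBot`, another entry those guides commonly include):

```
# Common Crawl's crawler
User-agent: CCBot
Disallow: /

# OpenAI's crawler, another common entry in GenAI-blocking guides
User-agent: GPTBot
Disallow: /
```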

Common Crawl powers Alexandria Search and was the basis of Stract’s initial index, both of which are upstart FOSS engines.
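
To give a sense of how engines can build on it: Common Crawl publishes a public CDX index API at index.commoncrawl.org that maps URLs to locations in its WARC archives. A minimal sketch in Python, assuming the `CC-MAIN-2024-10` crawl snapshot (current crawl IDs are listed on that site):

```python
# Sketch: query Common Crawl's public CDX index for captures of a URL.
# The crawl ID below is one example snapshot; substitute a current one.
import json
import requests

API = "https://index.commoncrawl.org/CC-MAIN-2024-10-index"

resp = requests.get(
    API,
    params={"url": "example.com/*", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

# The API returns one JSON object per line, one per capture.
for line in resp.text.splitlines():
    record = json.loads(line)
    # "filename", "offset", and "length" locate the raw WARC record
    # in Common Crawl's public archives.
    print(record["url"], record["status"], record["filename"])
```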

A similar EU-focused project is OpenWebSearch/Owler.

Originally posted on seirdy.one: See Original (POSSE).
