The Cloud Bandwidth Nightmare – Computerworld

Then came search engine spiders. (Note: spiders and crawlers are interchangeable terms; both are bots.) Sure, they ate bandwidth, but the assumption was that being indexed was worth it: search brought in customers and new prospects.

Search spiders mostly respected robots.txt instructions about which sites they could visit and which pages on those sites they could crawl. Because search providers knew that most websites welcomed their visits, they more or less honored those restrictions.
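Under the hood, robots.txt is just a plain-text file at the root of a site that tells crawlers, by user agent, which paths they may fetch. As a minimal sketch (the site URL and bot name below are hypothetical), this is how a well-behaved crawler checks the file before requesting a page, using Python's standard-library robots.txt parser:

```python
# Minimal sketch of a polite crawler consulting robots.txt before fetching a page.
# The site and bot name are hypothetical; urllib.robotparser is part of the standard library.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # hypothetical site
parser.read()  # download and parse the site's robots.txt

# A rule such as "User-agent: *" / "Disallow: /private/" would make this return False,
# and a well-behaved crawler would then skip the page entirely.
allowed = parser.can_fetch("ExampleBot", "https://example.com/private/report.html")
print("Allowed to crawl:" , allowed)
```

The catch, as the rest of the article notes, is that robots.txt is purely advisory: nothing in the protocol forces a crawler to ask, or to obey the answer.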

Which brings us to today, when we find that the companies behind LLMs, through various sneaky mechanisms, do not respect those "do not enter" signs. And their crawlers do not deliver the perceived value of human visitors or even search engine spiders. Instead of bringing new prospects to a company's site, they steal its data, use it for their own apps, and then sell it to others.
