First, I'd like to thank everyone who took part in the Cloudflare discussion. I'm here with another question / discussion.

Hi all, I've been researching this online for a while now: blocking backlink tools and, more specifically, bandwidth leechers and content-scraping spiders. I've used robots.txt, although they bypass that. On Apache (my web server) I've used the .htaccess file to block spiders by their user-agent strings and also by their IP ranges (and redirect them to other websites).
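
For anyone wanting a starting point, the rules look roughly like this (the bot names and IP range are placeholders, not a definitive list, and the Deny syntax is Apache 2.2-style; 2.4 uses Require instead):

# Redirect backlink-tool spiders elsewhere by user-agent
# (bot names are examples; match against what you see in your logs)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AhrefsBot|MJ12bot|SemrushBot) [NC]
RewriteRule .* http://example.com/ [R=301,L]

# Block an offending IP range outright (placeholder range shown)
Order Allow,Deny
Allow from all
Deny from 192.0.2.0/24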

It seems to me, however, that they don't always abide by the rules: the spiders can call themselves whatever they want, even Googlebot. Right now it seems like a losing battle.
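
One partial counter I've seen is to reject anything that claims to be Googlebot but doesn't come from Google's crawler addresses. Something like this (the 66.249.64.0/19 range is only illustrative; check Google's published ranges before relying on it):

# Refuse "Googlebot" requests from outside Google's crawler range
# (66.249.64.0/19 here is illustrative; verify the current ranges)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteCond %{REMOTE_ADDR} !^66\.249\.(6[4-9]|7[0-9]|8[0-9]|9[0-5])\.
RewriteRule .* - [F,L]

Google's own advice is to verify crawlers with a reverse DNS lookup (the host should resolve to something under googlebot.com) rather than hard-coding IPs, so treat the range above as a rough filter only.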

So I was thinking of trying a whitelist, but this seems too risky to me. Blocking everything but Google could have some bad consequences for your website, and I think Google sometimes visits anonymously (with a generic user agent) as well.
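
If I did try it, the least risky version I can think of is whitelisting only at the bot level: refuse anything that identifies itself as a bot, crawler, or spider unless it's on a short allow list. A rough sketch (the patterns are illustrative, and spoofed or anonymous crawlers will still slip through, which is exactly the problem):

# Bot-level whitelist: block self-identified bots not on the allow list
# (patterns are illustrative; spoofed/anonymous crawlers still get through)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (bot|crawler|spider) [NC]
RewriteCond %{HTTP_USER_AGENT} !(Googlebot|bingbot) [NC]
RewriteRule .* - [F,L]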

Having thought it over, obtaining my backlinks will be no picnic, and if a webmaster is foolish enough to try to copy them in bulk, Google might flag them, because I've taken months to build them and they all have unique content. While they're busy doing that, I think my time is better spent improving my website's position. (This is where I'm at right now.)

Although I haven't been affected yet, content scrapers and bandwidth leechers are also something I'll have to learn about. Do they operate in a similar fashion?
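
From what I've read so far, the most common form of bandwidth leeching is hotlinking: other sites embedding your images so your server pays for the traffic. That one at least has a well-known .htaccess fix (replace example.com with your own domain):

# Hotlink protection: refuse image requests referred from other sites
# (allows empty referers so direct visits and some proxies still work)
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(jpe?g|png|gif)$ - [F,NC]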

Any solutions would be greatly appreciated. (It's a shame we have to spend time on things like this.)