Some very valuable information in this study from SEMrush. For me, the key is in this excellent chart:
Summary

MOST webmasters of medium-sized and bigger sites are getting hit by one of these schemes right now…and some of you are getting hit by all three. And just because you’re not seeing a precipitous drop in traffic (yet!) doesn’t mean your backlink profile isn’t edging toward the point where Google starts losing trust in your site.
Continue reading 3 Train Wreck Link Schemes You Didn’t Even Know Had Hit You
I discovered TripleChecker recently, when they did a freebie intro scan of my site and found a whole bunch of typos and broken links. Their interface makes it easy to see what’s wrong, and it links to the page with the problem. Super helpful for keeping your site clean, professional, and functioning as it should. Have it check your website monthly (up to 1,000 pages) for $14.99/month, or weekly (up to 2,500 pages) for $19.99/month. Sure, you can manually run a crawl regularly with something like Screaming Frog to find the broken links, and you can go page by page and feed the text into a tool like Grammarly, but this wraps it all in a neat package that you don’t have to remember to do each month. I liked it enough that I added it to my Resources menu, and to my standard set of site audit recommendations.
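If you want a feel for what the manual approach involves, here’s a minimal sketch of a DIY broken-link check using only Python’s standard library. The function names (`extract_links`, `check_link`) and the `User-Agent` string are my own, not anything TripleChecker or Screaming Frog actually uses; a real crawler would also need politeness delays, robots.txt handling, and recursion.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html, base_url):
    """Return absolute URLs for every <a href> found in the HTML."""
    parser = LinkExtractor()
    parser.feed(html)
    return [urljoin(base_url, href) for href in parser.links]

def check_link(url, timeout=10):
    """Return (url, status): an HTTP status code, or an error string."""
    try:
        req = Request(url, method="HEAD",
                      headers={"User-Agent": "diy-link-check"})
        return (url, urlopen(req, timeout=timeout).status)
    except HTTPError as e:
        return (url, e.code)   # e.g. 404 for a broken link
    except URLError as e:
        return (url, str(e.reason))
```

You’d fetch each page of your site, run `extract_links` on it, then `check_link` on every URL and flag anything returning 4xx/5xx. The point of a paid tool is exactly that you don’t have to maintain and remember to run something like this.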
This podcast with Stan Ventures is a Q&A with me about negative SEO: the patterns we’re seeing, negative SEO vs. hackers/scrapers, why many backlink tools can’t see negative SEO, and more. https://www.stanventures.com/seo-podcast/safeguard-website-from-negative-seo/
UPDATE: I had always (wrongly) thought that Google wouldn’t index pages that were blocked in robots.txt. But John Mueller clarified this for me (thank you, John): robots.txt controls the crawling BUT NOT the indexing. A good explanation of that is here:

Google is now sometimes indexing pages DESPITE their being explicitly blocked in robots.txt. The screenshot above is from the new Search Console Index Coverage report, and shows that Google is choosing to index 36 pages that it can see are explicitly blocked in robots.txt.
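The practical upshot (my own summary of how this works, not a quote from John): if you want a page kept out of the index, don’t rely on a robots.txt Disallow — in fact, blocking the crawl means Google can never see a noindex directive on the page. Instead, allow crawling and serve a noindex. Paths and filenames below are placeholders:

```
# robots.txt — controls CRAWLING only; a Disallowed URL can still be
# indexed if other pages link to it
User-agent: *
Disallow: /private/

<!-- to keep a page OUT OF THE INDEX, let Google crawl it and serve: -->
<meta name="robots" content="noindex">

# or, for non-HTML files (PDFs etc.), the equivalent response header:
X-Robots-Tag: noindex
```

Just don’t combine the two on the same URL — the Disallow will hide the noindex from Googlebot.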
I did a post for the Moz main blog on XML sitemaps, all the ways people do them wrong, and how to use them to diagnose indexation issues. Read the full post here.
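For reference, a minimal valid sitemap per the sitemaps.org protocol looks like this (URLs and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2017-05-01</lastmod>
  </url>
</urlset>
```

One of the classic mistakes is listing everything the CMS can spit out; a sitemap should contain only canonical, indexable, 200-status URLs, because it’s your statement to Google of which pages you actually want indexed.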
Despite Google’s claims that as of Penguin 4.0 last fall, they just ignore the bad links, here’s concrete evidence to the contrary. This is the traffic from one of my clients’ sites. On March 2nd, I submitted a pretty major disavow file to undo some really bad ancient history. And no, they did NOT have a manual penalty showing in Search Console.
My friend Mike King, an ex-rapper turned super SEO nerd, is one of the most technically skilled hard-core SEO people I know. Continue reading The Technical SEO Renaissance: a brilliant post from Mike King
My friend Aleyda Solis just published this super useful .htaccess tool for generating many of the common .htaccess rules needed for SEO: things like redirecting non-slash-ending URLs to the ones ending in slashes; redirecting from the non-www subdomain to the www one; redirecting http to https; even the code for the custom 404 error handler.
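To give a flavor of what the tool generates, here are hand-written equivalents of those rules — my own sketch, assuming Apache with mod_rewrite enabled and `example.com` as a placeholder domain; test on a staging site before deploying, since a bad rewrite can take your whole site down:

```apache
RewriteEngine On

# http -> https
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

# non-www -> www
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

# add a trailing slash to URLs that aren't actual files
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]

# custom 404 error handler
ErrorDocument 404 /404.html
```

Note the 301s, so the redirects pass link equity; that’s exactly why these rules matter for SEO and not just housekeeping.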