I discovered TripleChecker recently, when they did a freebie intro scan of my site and found a whole bunch of typos and broken links. Their interface makes it easy to see what’s wrong, and it links you to the page with the problem. Super helpful at keeping your site clean, professional, and functioning as it should. Have it check your website monthly (up to 1,000 pages) for $14.99/month, or weekly (up to 2,500 pages) for $19.99/month. Sure, you can manually do a crawl test regularly with something like Screaming Frog to find the broken links, and you can go page by page and feed the text into a tool like Grammarly, but this wraps it all in one neat package that you don’t have to remember to run each month. I liked it enough that I added it to my Resources menu and to my standard set of site audit recommendations.
This podcast with Stan Ventures is a Q&A with me talking about negative SEO: what patterns we’re seeing, negative SEO vs. hackers/scrapers, why many backlink tools can’t see the negative SEO links, etc. https://www.stanventures.com/seo-podcast/safeguard-website-from-negative-seo/
UPDATE: I had always (wrongly) thought that Google wouldn’t index pages that were blocked in robots.txt. But John Mueller clarified this for me (thank you, John): robots.txt controls the crawling, BUT NOT the indexing. A good explanation of that is here:

Google is now sometimes indexing pages DESPITE their being explicitly blocked in robots.txt. The screenshot above is from the new Search Console index coverage report, and it shows that Google is choosing to index 36 pages that it can see are explicitly blocked in robots.txt.
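To make John’s point concrete, here’s a minimal sketch (the /private/ path is just a placeholder). A Disallow rule in robots.txt only stops Googlebot from crawling:

    User-agent: *
    Disallow: /private/

If other sites link to a URL under /private/, Google can still index that URL (usually with no snippet, since it can’t read the page). To keep a page out of the index entirely, the page has to be crawlable so that Google can see a noindex directive on it:

    <meta name="robots" content="noindex">

(or the equivalent X-Robots-Tag: noindex HTTP header).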
One of my clients discovered this latest dirty trick for ranking, and it’s being used by dirtbags who have pirated content from legit publishers. If you do this search, you’ll see that 3 of the top organic listings are from a very trusted domain… Google.com. The problem is, each of these is a user-generated MyMaps page, and it’s just a crappy page with a link to the download and some text.

Reporting this to Google now… and the publisher of the legit content is submitting a DMCA takedown.
I did a post for the Moz main blog on XML sitemaps, all the ways people do them wrong, and how to use them to diagnose indexation issues. Read the full post here.
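For reference, a bare-bones sitemap file looks like this (the example.com URL and date are placeholders, not anything from the post):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/some-page/</loc>
        <lastmod>2017-06-01</lastmod>
      </url>
    </urlset>

One common diagnostic approach along the lines the post describes: split your URLs into several smaller sitemaps by page type, submit each one in Search Console, and compare submitted vs. indexed counts per sitemap to see which sections of the site Google is choosing not to index.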
Despite Google’s claims that, as of Penguin 4.0 last fall, they just ignore the bad links, here’s concrete evidence to the contrary. This is the traffic from one of my clients’ sites. On March 2nd, I submitted a pretty major disavow file to undo some really bad ancient history. And no, they did NOT have a manual penalty showing in Search Console.
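For anyone who hasn’t built one: a disavow file is just a plain-text file you upload through Google’s disavow links tool, with one URL or domain per line and # for comments. A made-up example (these domains are placeholders, not my client’s actual links):

    # spammy directory and article-network links found in the backlink audit
    domain:spammy-directory.example
    domain:article-network.example
    # a single bad page, rather than the whole domain
    http://some-blog.example/paid-links-page.html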
My friend Mike King, an ex-rapper turned Super SEO Nerd, is one of the most technically skilled, hard-core SEO people I know. Continue reading The Technical SEO Renaissance: a brilliant post from Mike King
My friend Aleyda Solis just published this super useful .htaccess tool for generating many of the common .htaccess rules needed for SEO: things like redirecting non-slash-ending URLs to the ones ending in slashes; redirecting from the non-www subdomain to the www one; redirecting http to https; even the code for the custom 404 error handler.
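For a rough idea of what those rules look like in practice, here’s a generic sketch (example.com and /404.html are placeholders, and this is my own illustration, not the tool’s exact output — Aleyda’s generator tailors the rules to your setup):

    # Redirect http to https (assumes Apache with mod_rewrite enabled)
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

    # Redirect the non-www subdomain to the www one
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

    # Add a trailing slash to URLs that aren't actual files
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*[^/])$ /$1/ [R=301,L]

    # Custom 404 error handler
    ErrorDocument 404 /404.html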
Rand Fishkin of Moz does an excellent job of explaining some of the conflicts and tradeoffs (and synergy) between user experience (UX) and search engine optimization (SEO):