UPDATE: I had always (wrongly) thought that Google wouldn’t index pages that were blocked in robots.txt. But John Mueller clarified this for me (thank you, John): robots.txt controls the crawling, BUT NOT the indexing. A good explanation of that is here:
Google is now sometimes indexing pages DESPITE their being explicitly blocked in robots.txt:
The screenshot above is from the new Search Console index coverage report, and it shows that Google is choosing to index 36 pages that it can see are explicitly blocked in robots.txt.
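To make the distinction concrete: a robots.txt rule like the hypothetical one below stops Googlebot from crawling those URLs, but it does NOT stop them from being indexed if enough other pages link to them.

```
# Hypothetical robots.txt: blocks crawling of /private/, but URLs
# under /private/ can still be INDEXED if other pages link to them
User-agent: *
Disallow: /private/
```

If you actually want a page kept out of the index, the counterintuitive move is to let Googlebot crawl it and serve a noindex robots meta tag (or X-Robots-Tag header) instead; a page blocked in robots.txt never gets crawled, so Google never sees the noindex.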
Have you taken one of the quizzes from Women.com on Facebook lately? Been impressed with your brilliance…got 100% on the quiz?
Well, maybe you ARE brilliant, and maybe you DID get 100%. But if you have a creepy feeling you’ve been used, you’re unfortunately right.
Women.com has been putting up a number of quizzes lately where, if you get anywhere close to most of the answers correct, they’ll report you as getting 100%. Why would they do this?
Well…so you’ll share it on Facebook. If you scored 65%, you probably wouldn’t be proud enough to share it. So, by faking your score to make you look like a rockstar, they greatly increase the chance you’ll share their quiz on your timeline.
I’ve taken this quiz (and a couple of others they’ve run) and deliberately answered 3-4 questions incorrectly, and still… 100%. I then tried answering all of them deliberately wrong, and it didn’t show 100%. My guess is they did that so the quiz still looks legit to anyone who’s either testing it or knows they got hardly any answers right.
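Purely as an illustration (I obviously have no visibility into Women.com’s actual code), the behavior above is consistent with scoring logic along these lines; the 60% threshold is a made-up number:

```python
def displayed_score(num_correct: int, num_questions: int) -> int:
    """Hypothetical sketch of quiz scoring, inferred from observed
    behavior: any reasonably good score gets reported as 100%."""
    actual = round(100 * num_correct / num_questions)
    # Guessed threshold: close-to-mostly-right gets bumped to a
    # perfect score, while an all-wrong run is shown honestly so the
    # quiz doesn't look rigged to anyone testing it.
    return 100 if actual >= 60 else actual
```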
They’re using you, by deception, to market their site for them on Facebook, for free.
One of my clients discovered this latest dirty trick for ranking, and it’s being used by dirtbags who have pirated content from legit publishers.
If you do this search, you’ll see that three of the top organic listings are from a very trusted domain… Google.com.
The problem is, each of these is a user-generated MyMaps page: just a crappy page with a link to the download and some text.
Reporting this to Google now… and the publisher of the legit content is submitting a DMCA takedown.
I did a post for the Moz main blog on XML sitemaps: all the ways people do them wrong, and how to use them to diagnose indexation issues.
Read the full post here.
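For anyone who hasn’t hand-rolled one: a minimal valid XML sitemap is just this (the URLs and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2017-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/some-page/</loc>
  </url>
</urlset>
```

The diagnostic value comes from listing only the canonical URLs you actually want indexed, then comparing what Google reports as indexed against that list.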
Here’s an interesting case of Google’s local organic algorithm failing to produce the right results, and it tells us a bunch about how Google tries to generate local results in the organic listings outside of the 3-pack.
I’m sitting here near Sunriver, Oregon, and I search for “restaurants near me”. The 3-pack looks pretty good… it’s got the pushpin located very accurately for me (within yards, in fact, even though I’m on a PC connected via my cable internet provider, not my cellphone with GPS).
Continue reading Outside of the 3-Pack, How Does Google Find Local Results?
Despite Google’s claims that, as of Penguin 4.0 last fall, they simply ignore bad links, here’s concrete evidence to the contrary.
This is the traffic from one of my clients’ sites. On March 2nd, I submitted a pretty major disavow file to undo some really bad ancient history. And no, they did NOT have a manual penalty showing in Search Console.
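For anyone who hasn’t built one: a disavow file is just a plain text file you upload via Google’s disavow tool, with one domain or URL per line. These entries are made up for illustration:

```
# Lines starting with # are comments.
# Disavow every link from an entire domain:
domain:spammy-link-network.example
# Disavow a single linking URL:
http://blog.example.org/some-paid-link-page.html
```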
There seems to be a faction out there trying hard to convince the world that the Holocaust never happened, using SEO to make Holocaust-denial websites rank highly for the search term “is the holocaust real”. You can help combat this, if you like, by doing what I just did here: link to that BBC article with the anchor text “is the holocaust real”.
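In HTML terms, that’s just an ordinary link with the search phrase as the anchor text; the href below is a placeholder for the actual BBC article URL:

```html
<!-- Placeholder href: substitute the actual BBC article URL -->
<a href="https://www.bbc.co.uk/PLACEHOLDER-ARTICLE-URL">is the holocaust real</a>
```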
My friend Mike King, an ex-rapper turned Super SEO Nerd, is one of the most technically skilled, hard-core SEO people I know. Continue reading The Technical SEO Renaissance: a brilliant post from Mike King
My friend Aleyda Solis just published this super useful .htaccess tool for generating many of the common .htaccess rules needed for SEO: things like redirecting URLs that don’t end in a slash to the slash-ending versions; redirecting the non-www subdomain to the www one; redirecting http to https; even the code for a custom 404 error handler.
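To give a flavor of what those rules look like, here’s a rough sketch of my own (NOT the tool’s actual output; example.com is a placeholder, mod_rewrite is assumed to be enabled, and you’d want to test all of this before deploying):

```apache
RewriteEngine On

# Redirect the non-www host to www (and to https at the same time)
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

# Redirect http to https
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

# Add a trailing slash to URLs that aren't actual files
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]

# Custom 404 error handler
ErrorDocument 404 /404.html
```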
Rand Fishkin of Moz does an excellent job of explaining some of the conflicts and tradeoffs (and synergy) between user experience (UX) and search engine optimization (SEO):