Keep Track of Google’s Crawl Cycle for 20 Bucks

As I’m sure you’re already aware, there is no shortage of online services that help you keep track of your site’s metrics. Many people rely on things like Analytics, Sitemeter, Technorati, Alexa and so forth to see how their blogs and sites are doing. Most of these services are designed to track the number of visitors, traffic sources, and things like that.

Some guy named Peter (he never gave a last name) put in a review request for SEOmeter, a new metrics tool that doesn’t track visitors or page views. Instead, it keeps an eye on Google’s crawl cycle. Why should you care? Read on and find out.

What is Crawl Rate and Crawl Cycle?

Like I said, there are many numbers for webmasters to track when they want to see if their sites are on the rise or about to enter the deadpool. If you see that your blog is attracting more visitors, that’s a good thing. If you see that page views are up, that’s a good thing. Those are obvious enough, but keeping track of how often Google crawls your site could prove to be very important too.


In SEOmeter’s own words:

How often a search engine visits and crawls website content is an often neglected, but important metric for search engine optimization. A search engine’s crawling rate can be quantified by crawl cycle (CC), which is the time between two consecutive crawls by search engine robots. A short CC usually means that the website is “trusted” by search engines, and this trust, in turn, is reflected in the website’s search engine ranking.

For many blogs, search engines serve as a monstrous source of traffic, especially if you don’t have a strong readership like John Chow dot Com. As a result, getting ranked for certain keywords or keyword phrases can have a monumental effect on the level of traffic that your blog or site receives. For this reason, many of us strive to get on Google’s good side, partaking in all sorts of SEO strategies. By tracking Google’s crawl cycle, you can better understand whether you’re in Google’s good graces or on their “let’s ban this guy” list.

Am I The Only One Who Sees Alexa?

The graph provided by the SEOmeter SEO tool looks an awful lot like the graphs that you see on Alexa. Let’s put the value of Alexa rankings aside for just a moment and look at the aesthetics. The SEOmeter graph spans three months by default (they’ll probably add more options later on), puts the name of the site in the upper-left corner, has a similar grid pattern, and boasts a watermarked image behind the graph itself.


Also like Alexa, the information shown is based on a three-month running average. The crawl cycle (CC) is defined as “the time between two consecutive crawls (i.e., cache updates) done by search engine robots.” For John Chow dot Com, you can see that Google’s crawl cycle here is a little more than 0.5. More specifically, the detailed information displayed beneath the graph reveals a 3-month crawl cycle of 0.57. This means that Google sends its robot roughly every 0.57 days (almost twice a day). The lower this number, the better, because it means that your newest content shows up in search engine results sooner and Google “trusts” you more.
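
That per-day figure is just the average gap between consecutive cache updates. As a rough illustration of the arithmetic (the timestamps below are made up, and this is not necessarily how SEOmeter computes it), a crawl cycle can be derived from a list of observed cache dates like this:

```python
from datetime import datetime

def crawl_cycle(cache_dates):
    """Average gap, in days, between consecutive cache updates."""
    stamps = sorted(datetime.fromisoformat(d) for d in cache_dates)
    gaps = [(b - a).total_seconds() / 86400 for a, b in zip(stamps, stamps[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical cache timestamps gathered over a couple of days
dates = ["2008-01-01T02:00", "2008-01-01T15:40", "2008-01-02T04:10"]
print(round(crawl_cycle(dates), 2))  # prints 0.55, i.e. a crawl almost twice a day
```

The more cache dates you collect, the closer this average gets to a meaningful three-month figure like SEOmeter’s 0.57.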


From the site-specific page, you also find out when the most recent crawl took place and at what time of day most crawls occur. For John Chow dot Com, you can see that 14% of crawling occurred between 1800 and 1859 GMT.
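
That time-of-day breakdown is easy to reproduce yourself if you log crawl timestamps. A minimal sketch, using made-up data rather than SEOmeter’s actual figures:

```python
from collections import Counter

def hourly_share(timestamps):
    """Percentage of crawls falling in each GMT hour."""
    hours = Counter(t.split("T")[1][:2] for t in timestamps)
    total = sum(hours.values())
    return {h: 100 * n / total for h, n in sorted(hours.items())}

# Hypothetical crawl times in ISO format, GMT
hits = ["2008-01-01T18:05", "2008-01-01T18:40", "2008-01-02T06:15",
        "2008-01-02T18:30", "2008-01-03T09:50", "2008-01-03T18:45",
        "2008-01-04T02:20"]
share = hourly_share(hits)
print(f"{share['18']:.0f}% of crawls between 1800 and 1859 GMT")  # 57% with this sample
```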

But It Ain’t Free

It’s up to you to decide whether Google’s crawl cycle on your site (or blog) is something worth tracking. If you do choose to monitor this metric, it’s going to cost you. Unlike the many free services out there, like the ones mentioned in the opening paragraph, SEOmeter is not free, and that’s why so many sites are not listed in its directory. (You can find more information in the FAQ and blog.)


It costs $20 to submit your site to SEOmeter, and this buys you one year of site-monitoring. Every year after that costs you an additional $20. It’s a fairly minor investment, to be sure, but it makes for a significantly higher barrier to entry. SEOmeter might be better off taking the advertising route (they already have a series of 125×125 buttons in the sidebar) rather than charging people for their services. After all, the resulting data is totally public anyway.

Click Here to Submit a Website to SEOmeter

57 thoughts on “Keep Track of Google’s Crawl Cycle for 20 Bucks”

  1. Icheb says:

    How is this service not useless?

      1. Why pay when you can find out for free when your blog/site was cached last? Sure, you won’t have all the graphs for an x to z period of time, but you still get a picture of Google’s trust. If the service were free it would probably get more people to sign up, therefore more traffic, and they would probably make more $$ from ads featured on their pages.

        1. I think this solution is mainly for small companies, which don’t have a big IT staff.

        2. mahdi yusuf says:

          he does have a point, but at the same time you’re not going to make 1 million dollars in 365 days! 😈

    1. Yep – I’m with ya. If it was free I’d use it, but in its present form I’m hardly interested.

  2. Liviu Pantea says:

    Nope, my first impression too was that it’s really much like Alexa.

    1. Steve! says:

      Same, I also don’t think that there is much point in this service. 😕

    2. Wish my GF wasn’t insistently hitting the wall to call me, otherwise I would have read this article and gotten a first impression too.

      Will read it tomorrow. Happy New year to y’all (in case I won’t have the chance to say that tomorrow).

  3. Ben says:

    If you use Google’s Webmaster Tools, they clearly tell you when the last crawl of your site was. Pop this into a quick and dirty spreadsheet and you have the same thing, with no investment of $20.00. Just your time. If you’re a new blogger like me, this would be the route to go.

    1. MoneyNing says:

      Hmm…. If you really want it in that spreadsheet form then I’m not sure if the $20 is all that bad a price to pay. Your time could be better spent elsewhere.

      Having said that though, I wouldn’t pay the $20 either.
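
    Ben’s quick-and-dirty spreadsheet can even be skipped if you have raw server logs: the same crawl dates can be pulled straight from an access log. A hypothetical Python sketch (the log lines are invented, and matching the user agent string alone is not proof it was really Googlebot):

    ```python
    import re
    from datetime import datetime

    # Invented Apache combined-log lines; only the Googlebot entries matter.
    LOG = """\
    66.249.66.1 - - [30/Dec/2007:04:12:01 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    10.0.0.5 - - [30/Dec/2007:09:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows)"
    66.249.66.1 - - [30/Dec/2007:17:45:30 +0000] "GET /feed HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    """

    def googlebot_visits(log_text):
        """Return timestamps of requests whose user agent names Googlebot."""
        visits = []
        for line in log_text.splitlines():
            if "Googlebot" in line:
                m = re.search(r"\[([^\]]+)\]", line)
                visits.append(datetime.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S %z"))
        return visits

    for t in googlebot_visits(LOG):
        print(t.isoformat())
    ```

    Feed those timestamps into the crawl-cycle arithmetic and you get the same metric for free.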

  4. I agree with your review John. I think a better model would be to offer the service for free, but sell advertising on the site or the results page.

    1. Contest Beat says:

      Agreed also – $20 is just a bit too “meh”

    2. Michael Kwan says:

      Once again, review by me, not by John.

      1. Mubin says:

        Poor Poor Michael, no one reads the damn by-line anymore

      2. mahdi yusuf says:

        i feel ya mike! it happens to me all the time too! on my blog!

      3. MoneyNing says:

        Eventually, we will figure out that you are pretty much doing most of the reviews now 🙂

    3. Robert says:

      I tend to agree. In looking through his server it appears that he uses the for the indexing. This is easy enough to set up on your Linux computer. He is, however, allowing up to 100 free site monitorings as of the 30th of December.

  5. The idea can probably fly if they offer it for free to get email addresses and then upsell products like ‘how to make google crawl your site more often’ etc. etc.

  6. Austin says:

    This is free information… check out Google Web Master Tools!

  7. Karol Krizka says:

    There is a plugin for WordPress called Bot-Tracker that tracks the crawlers on your blog. It shows similar information and best of all it’s free!

  8. I jumped onto their blog and entered my site URL in time to get one of the free “first 100” slots they’re giving away. It confirmed what I already know – Google hates me. 😆

    1. MoneyNing says:

      Just write good content and eventually Google will know about your site. Just submit your sitemap and use the auto sitemap generator plugin. I have a blog with 10 posts and it’s in the Google index.

  9. David Chew says:

    If you can get it free then maybe this is not a good choice, but since this one you have to pay for, maybe it has some good quality that is useful for new bloggers or new websites.

  10. krazl says:

    GNU winz.. 🙄

  11. Patrick says:

    Don’t really see a lot of value in this. I don’t see why I need to pay $20 for this service. It should be free. Alexa is free and provides better data.

  12. Alan Johnson says:

    Just publish content on a regular basis and your website will end up being crawled often enough and, also, there are free tools which help you determine when your website was last crawled, if you are interested. 20 bucks is pocket change, but the service in itself is not useful enough to justify any kind of paid membership at this point.

    The domain name is decent enough and the site could be developed into a great resource but they need to implement a series of improvements before even thinking of charging for this service.

    Alan Johnson

  13. I’m thinking of trying this service. It looks really cool. 🙂

  14. This sounds fantastic and promising; however, there is one major flaw:
    It is ridiculous to charge for a service that is completely dependent on Google’s crawling policy; if Google, like Yahoo and MSN, suddenly decides to “not publicise” their crawl timestamps, this service will just go bust. What happens then to all the paid subscribers?

    Here is a quote from their FAQ which clearly states how their service is completely based on Google’s one policy:

  15. Sounds not very useful for small blog 😥

  16. Fat Man says:

    um, kinda dumb. i know my crawl cycle is 1 day or less. thats good enough for me. how did I find that out? I checked my cache date in google. it cost me a whopping $0 bucks. this guy should go ahead and make the service free, then get tons of traffic from users that dont know how to get their own crawl cycle and then charge for ads on the site. just copy alexa…all the way, you already did with the graph.

    1. mahdi yusuf says:

      fat man scoop has a point! now get down and shake your rump!! hehe

  17. Good review Michael.

    I think this site needs to wait until they can provide a service that will make Google crawl your site more often and then charge $20/year for that. I just can’t imagine how this data would be useful unless they show you how to get crawled more often.

    1. Alan Johnson says:

      How to get crawled more often? Simply give the spiders a reason to return (content posted on a regular basis). The more you post, the more you will let spiders know that they need to return on a regular basis. It’s actually as simple as that.

      Alan Johnson

      1. Then why would anyone care about a service that will tell them how often they are crawled?

        1. Alan Johnson says:

          Why would people care about this service? The answer is simple: they don’t 🙂

          Alan Johnson

  18. ATV Style says:

    Great review, as usual


    Although crawl rate is a valid metric it doesn’t apply to blogs in any meaningful way because blog index pages update frequently. There is a completely free and useful way of checking your crawl rate effects. Let me explain….

    example: you have 5 articles on your index page. Google the oldest article on your index page (copy and paste the title) and if it’s indexed, you can write another article. If it’s not yet, you’ve been a busy bee writing a lot; wait until it is (or put more articles on the index).

  19. Jovan says:

    dumbest idea I’ve seen

  20. Jovan says:

    also, if you right-click the snapshot image on the bottom, it says it’s from Alexa… so yeah, this is Alexa all the way.

    1. MoneyNing says:

      haha that’s a nice find 🙂

  21. Alan Johnson says:

    They seem to be accepting 100 websites for free now. Personally, I’m not interested but those who are might as well get this service for free.

    Alan Johnson

  22. Jason says:

    WOW! Could I pay a bit more for even less?

  23. Victor says:

    A far better solution is Crawltrack

    -crawlers and spiders visits tracking
    -entry pages
    -number of indexed pages
    -pages viewed by crawlers and spiders
    -visitors send by main search engines
    -number of backlinks

    It’s very handy and it’s FREE!

  24. bob says:

    $20 is too high for everyone. Unless it provides services/tips on how to improve google crawling rate, I don’t see why we need this service.

  25. Etienne Teo says:

    give more to the customers and the $20 will seem worth it.

  26. Mike Huang says:

    $20 is pretty steep for this type of service. I would suggest the owner lower it to $1/year, which would actually bring in a lot more purchases than $20/year.

    Anyways, where is Mr. Chow?!?! There are so many guest posts…LOL


    1. MoneyNing says:

      Good idea Mike 🙂 They will probably at least make their review money back.

  27. Justin says:

    The service may not be unique and may look a bit expensive for experienced webmasters or bloggers. However, there are many out there who are new to this world like me and may find it worth using at that cost.

    1. MoneyNing says:

      Good luck. I wonder if they will at least get $20 to cover their cost of the review from this 🙂

  28. Call me old-fashioned, but I really like Analytics. The price is unmatched as well. 😆

  29. I’m still trying to figure out what this service is actually good for.

  30. seo audit says:

    I don’t see the value of this service

  31. Looks like a lot of their service is based around Alexa, whose results can be cheated. Therefore this service isn’t worth the $20/year, plus the fact that you can get this info from virtually any webmaster tools site online.

  32. vga2usb says:

    I noticed that it takes a number of crawlings to have a site properly indexed across all Google data centers. Checking the website with a query often shows different results, depending on which server you hit. Now, instead of paying for any service like that, the money and time would be better spent on adding content and link building. IMHO.


  33. Mike Huang says:

    Well, Mr. Kwan was right. SEO Meter does use the same information Alexa provides, but switches it around to charge people.

    If you take a look at the free account I got here at SEO Meter, you can see that it’s the same thumbnail used on Alexa.


    1. VS2x8 says:

      huh? i think their service offers crawl stats, not thumbnail. what does thumbnail have to do with that?

  34. Abraham says:

    Couldn’t a programmer check the user agent of the requester? I think Google’s spider identifies itself in the UA or something like that.
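
    The check Abraham describes is real: Googlebot does announce itself in the user agent. But a UA string is trivial to fake, which is why Google’s documented advice is to confirm with a reverse-DNS lookup as well. A minimal sketch of that idea:

    ```python
    import socket

    def is_probably_googlebot(user_agent, remote_ip):
        """UA check plus reverse-DNS verification of the visitor's IP."""
        if "Googlebot" not in user_agent:
            return False
        try:
            # Reverse lookup: a genuine crawler resolves to a Google hostname
            host = socket.gethostbyaddr(remote_ip)[0]
            if not host.endswith((".googlebot.com", ".google.com")):
                return False
            # Forward-confirm: the claimed host must resolve back to the same IP
            return remote_ip in socket.gethostbyname_ex(host)[2]
        except (socket.herror, socket.gaierror):
            return False
    ```

    A visitor from 127.0.0.1 claiming to be Googlebot would fail the reverse-DNS step, for example.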

Comments are closed.