
Google TKOs AdWords API developers

In what can only be described as a blatant attempt to control AdWords data, Google has started disabling AdWords API developer tokens en masse, based on thresholds and reasons seemingly known only to Google.

I received the following email this afternoon, which I initially thought was an error (emphasis mine):

As stated in the AdWords API Terms and Conditions (Section II.4), we periodically review AdWords API activity. We noticed that there has been low usage of the AdWords API developer token associated with your My Client Center (MCC) manager ID XXXXXXXXXX in the last 30 days. For the purpose of ensuring quality, improving Google products and services and compliance with AdWords API Terms and Conditions, we have disabled this token.

If you wish to re-apply for the token, please visit the AdWords API Center in your account. Remember to answer the following in detail if you re-apply for the token:

  1. Describe the uses of your API application or tool with specific examples. For instance, account management or bid optimization.
  2. Who is or will be using your API application or tool? For example, colleagues in your company or advertisers or agencies to whom you are selling the tool.
  3. Please attach screenshots of your API application or tool. If the application or tool is yet to be developed, please provide relevant design documentation.
  4. Please provide a list of clients that will be using your API application or tool in an automated way.

Please know that we will take between 5 and 6 weeks to process all developer token re-applications.

Regards,
The AdWords API Team

TL;DR – we got screwed for ‘low volume’ usage; we can re-apply, but it will take 5–6 weeks.

It would be great to know what Google considers ‘low volume’. The developer token in question is used by our research tools (matching the original application description) and makes under 500,000 API calls per month. Our usage comes mostly in bursts (pulling fresh data), plus ad-hoc queries throughout the month as new data is requested (granted, these are mostly KeywordEstimator queries). The thing is, there are people on the AdWords support forums making in excess of 8 million API calls per month who have also been disabled – so what exactly constitutes low volume, and how much data do we need to pull in order to get re-included?
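
To put those numbers in perspective, here’s a back-of-the-envelope sketch (plain Python, nothing AdWords-specific) of how batch size and refresh cadence translate into monthly call counts. Every figure in it is an illustrative assumption, not our real workload:

    from math import ceil

    # Rough model of a batched keyword-research workload. All numbers below
    # are illustrative assumptions, not actual figures from our tools.

    KEYWORDS_PER_REQUEST = 100   # assumed keywords batched into one estimator call
    REFRESHES_PER_MONTH = 4      # assumed weekly 'pull fresh data' runs

    def monthly_calls(keyword_count, adhoc_calls=0):
        """Estimate total API calls per month for a given keyword set."""
        calls_per_refresh = ceil(keyword_count / KEYWORDS_PER_REQUEST)
        return calls_per_refresh * REFRESHES_PER_MONTH + adhoc_calls

    # A 5M-keyword set refreshed weekly, plus 10,000 ad-hoc lookups,
    # still comes in well under 500,000 calls per month.
    print(monthly_calls(5000000, adhoc_calls=10000))  # -> 210000

Which is exactly the problem: ‘volume’ is almost entirely a product of batch size and refresh cadence, and Google gives no indication of where the threshold sits.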

My theory is that these blanket rejections have nothing to do with volume of usage at all; I think they’re entirely about specific usage patterns. Google doesn’t like third-party apps using their keyword volumes for automated keyword research – they see this as ‘gaming’ – and instead want to control the spread of this data through captcha-protected properties of their own.

This is of course great news for tools like Wordtracker who, once they get their own access back (and they will), will likely clean up in the SEO market as everyone seeks an alternative way of accessing AdWords data outside of the Google API.

SEO Sunday: Sep 11 2011

This is the first in a series of weekly posts I plan to do as a kind of ‘best of the week’ rundown of what I found interesting on the web. If you find it interesting and valuable too, I’d love to know in the comments!

  • Sneaky Keyword Research
    A great little 5-slide deck from @rishil on some outside-the-box techniques for gathering valuable keyword research/traffic data.
  • The Reason GA Launched Multi-Channel Funnels
    Ever had a client ask why their AdWords data doesn’t match what GA tells them? Just point them at this fantastic post.
  • Hire a botnet! (or just loads of cheap proxies)
    I first saw this via a tweet from @richardbaxter of SEOgadget. It looks pretty dodgy if this is anything to go by, but I can think of TONNES of uses for a service like this where you need access to a large pool of IPs for one-off tasks (did someone say search volume manipulation? [yeah, it happens]).
  • Narrative Science creates machine-written copy that looks human
    A good piece from the NYTimes covering a startup that’s using AI to create unique copy with machine-chosen ‘angles’ that is virtually indistinguishable from human copy. Certainly of interest to anyone in the SEO space.
  • Winning at SEO with duplicate content
    I wasn’t at BrightonSEO but by all accounts this preso went down well. It covers an interesting topic and one that definitely warrants a bit of attention and research time in the near future.
  • How I wrote 500,000 unique GoogleBase Descriptions in 2 hours
    Another post tackling dupe content, but from a completely different angle. Anyone who works on enterprise sites will have hit this problem on multiple occasions, and not just for GoogleBase purposes – mostly for repetitive manufacturer blurb on retail products. I took a similar route recently on a client site, albeit with a slightly different process and implementation (a rough sketch of the general approach follows this list).
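
Since the last two links above circle the same underlying trick, here’s a minimal sketch of the general template approach – my own hypothetical rendering, not the process either post (or my client implementation) actually used. Fragment text and product fields are made up:

    import hashlib

    # Template-driven description generation: sentence fragments are swapped
    # per product, with the choice keyed to the SKU so each description is
    # stable between runs. A real system needs far more fragments and slots
    # than this to keep 500,000 descriptions from repeating.

    OPENINGS = [
        "The {name} from {brand} is built for {use}.",
        "{brand}'s {name} is a solid choice for {use}.",
        "Looking for {use}? The {name} by {brand} fits the bill.",
    ]
    CLOSINGS = [
        "In stock now with next-day dispatch.",
        "Order today for fast delivery.",
        "Ships within 24 hours.",
    ]

    def describe(product):
        """Build a deterministic, per-product description from fragments."""
        key = int(hashlib.md5(product["sku"].encode()).hexdigest(), 16)
        opening = OPENINGS[key % len(OPENINGS)]
        closing = CLOSINGS[(key // len(OPENINGS)) % len(CLOSINGS)]
        return opening.format(**product) + " " + closing

    print(describe({"sku": "ABC-123", "name": "Hardtail XC",
                    "brand": "Acme", "use": "cross-country riding"}))

Keying the fragment choice to the SKU rather than using random.choice matters: regenerating the feed shouldn’t churn every description on each run.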

Google using H2 tags in SERP descriptions

While doing some research in a niche in which I have a couple of affiliate sites, I noticed a weird SERP snippet showing up for an about.com URL:

[Screenshot: Google using H2 tags in a SERP snippet]

It starts with a truncated meta description and then goes straight into using the H2 tags from the page, along with a “5+ items” indicator. I’ve seen Google substitute heading tags for the title shown in the SERP before, but never for the snippet, and certainly never in this manner.

Is it new?

UPDATE: it looks like this is a permanent change to the SERPs and not just a bucket test – New snippets for list pages (Google Search blog).

Is Caffeine Behind Broken UK SERPs?

In a post on the Webmaster Central Blog, Google have announced the immediate availability of a test sandbox for what they are calling “Caffeine”. Described as “the first step in a process that will let us push the envelope on size, indexing speed, accuracy, comprehensiveness and other dimensions”, Caffeine is a looking glass into the SERPs of the future.

The real question though is: is Caffeine the reason the UK SERPs are broken?
