7 Proven Strategies to Check Keyword Position Using the Google API

Understanding where your pages rank for target keywords is foundational to any SEO strategy. But manually checking positions across hundreds or thousands of queries is neither scalable nor accurate. You end up with snapshot data that's already stale by the time you act on it.

Google offers several APIs that let you programmatically retrieve keyword position data, from the Search Console API to the Custom Search JSON API. The challenge is knowing which API fits your use case, how to set it up properly, and how to turn raw data into actionable insights.

This guide walks through seven battle-tested strategies for using Google APIs to check keyword positions at scale. Whether you're a marketer building a lightweight rank tracker, a founder monitoring competitive visibility, or an agency managing multi-site portfolios, these approaches will help you automate position tracking, reduce reliance on expensive third-party tools, and integrate ranking data directly into your workflows.

Beyond traditional SERP rankings, we'll also explore how to extend your monitoring to AI-powered search platforms. In 2026, knowing where you rank on Google is only half the picture.

1. Set Up the Google Search Console API for Accurate Position Data

The Challenge It Solves

Most rank tracking tools give you estimated positions based on their own crawling infrastructure. The Search Console API gives you something better: actual impression-weighted position data directly from Google, tied to your verified properties, with no estimation involved. This is the most accurate source of keyword position data available to any SEO practitioner.

The Strategy Explained

The Search Console API's searchAnalytics.query endpoint returns impressions, clicks, click-through rate, and average position for any verified property. It supports dimensions including query, page, country, device, and searchType, and provides up to 16 months of historical data.

Authentication uses OAuth 2.0. You'll need to create a project in Google Cloud Console, enable the Search Console API, and generate credentials. For server-side automation, a service account with delegated access to your Search Console property is the cleanest approach. Once authenticated, you can query the endpoint with a date range, dimension grouping, and optional filters to pull exactly the data you need. For a broader overview of monitoring your rankings, see our guide on how to check your position in Google search.

Implementation Steps

1. Create a Google Cloud project, enable the Search Console API, and generate OAuth 2.0 credentials or a service account key.

2. Verify your property in Google Search Console and grant your service account access as a full user or owner.

3. Make your first searchAnalytics.query POST request with a date range, dimensions set to ["query", "page"], and a row limit of up to 25,000 per request.

4. Parse the response to extract the keys array (containing your query and page) along with clicks, impressions, CTR, and position fields.

5. Store results in a database or spreadsheet, tagging each pull with the retrieval date for historical comparison.

Pro Tips

The API aggregates position as a weighted average across all impressions, so a position of 3.7 means your page appeared at different positions across multiple searches. Always pull data with at least a 3-day lag, since Search Console data can be delayed by 2-3 days. Use the startRow parameter to paginate through large datasets that exceed the 25,000-row limit per request.
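The steps above can be sketched in Python. This is a minimal illustration, not a complete tracker: the request-body and response-parsing helpers below are standalone and assume the field names documented for searchAnalytics.query, while the actual API call (shown in a comment) assumes an authenticated google-api-python-client service object and a placeholder property `sc-domain:example.com`.

```python
from datetime import date, timedelta

def build_query_body(start: date, end: date,
                     row_limit: int = 25000, start_row: int = 0) -> dict:
    """Request body for searchAnalytics.query, grouped by query and page."""
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["query", "page"],
        "rowLimit": row_limit,
        "startRow": start_row,  # increment by row_limit to paginate past 25,000 rows
    }

def parse_rows(response: dict) -> list[dict]:
    """Flatten the API response into one record per (query, page) pair."""
    records = []
    for row in response.get("rows", []):
        query, page = row["keys"]
        records.append({
            "keyword": query, "page": page,
            "clicks": row["clicks"], "impressions": row["impressions"],
            "ctr": row["ctr"], "position": row["position"],
        })
    return records

# End the date range 3 days back to respect the 2-3 day processing lag.
end_date = date.today() - timedelta(days=3)
body = build_query_body(end_date - timedelta(days=27), end_date)

# With an authenticated google-api-python-client service, the call would be:
#   service.searchanalytics().query(
#       siteUrl="sc-domain:example.com", body=body).execute()
```

Because the helpers only build and parse plain dictionaries, they are easy to unit-test without touching the live API.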

2. Build Automated Position Tracking with the Custom Search JSON API

The Challenge It Solves

The Search Console API only shows data for properties you own. If you need to track competitor keyword positions, monitor how your brand appears for terms you're not yet ranking for, or check positions across domains you don't control, you need a different approach. The Custom Search JSON API lets you parse live SERP results programmatically for any keyword.

The Strategy Explained

Google's Programmable Search Engine (formerly Custom Search Engine) exposes a JSON API that returns structured search results. The free tier provides 100 queries per day. Paid tiers are available at $5 per 1,000 queries, up to 10,000 per day, as documented in Google's official API documentation. Each request returns up to 10 results, so position tracking requires parsing result order and matching URLs against your target domain.

To check keyword position, send a query to the API, receive the JSON response containing ordered results, and iterate through the items array to find where your target URL appears. If your URL isn't in the first 10 results, you can paginate using the start parameter to check positions 11-100. You can also leverage the Google Indexing API to ensure your pages are discoverable before running position checks.

Implementation Steps

1. Create a Programmable Search Engine at programmablesearchengine.google.com, configure it to search the entire web, and note your Search Engine ID (cx parameter).

2. Enable the Custom Search JSON API in your Google Cloud project and generate an API key.

3. Build a function that accepts a keyword and target domain, sends a request to https://www.googleapis.com/customsearch/v1 with your key, cx, and q parameters, and parses the items array for matching URLs.

4. Implement pagination logic to check positions beyond 10 by incrementing the start parameter, which is 1-based (11, 21, 31, and so on), up to your desired depth within the API's 100-result maximum.

5. Log the keyword, found position, matching URL, and timestamp for each check.
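The position-finding logic from steps 3-5 might look like the sketch below. It is deliberately decoupled from the HTTP layer: you fetch each page of `items` yourself (the request URL shape is shown in a comment, with `API_KEY`, `CX`, and `KEYWORD` as placeholders) and pass the ordered pages in.

```python
from urllib.parse import urlparse

def find_position(result_pages: list[list[dict]], target_domain: str):
    """Walk ordered pages of Custom Search results (up to 10 items each)
    and return (absolute position, url) for the first result whose host
    is target_domain or a subdomain of it; None if it never appears."""
    position = 0
    for page in result_pages:
        for item in page:
            position += 1
            host = urlparse(item.get("link", "")).netloc.lower()
            if host == target_domain or host.endswith("." + target_domain):
                return position, item["link"]
    return None

# Each page of results would come from a GET request such as:
#   https://www.googleapis.com/customsearch/v1?key=API_KEY&cx=CX&q=KEYWORD&start=N
# where start is 1-based (1, 11, 21, ...) and at most 100 results are available.
```

Separating fetching from parsing also makes it trivial to test the ranking logic against canned JSON.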

Pro Tips

Be mindful of daily quota limits when running bulk checks. Prioritize your highest-value keywords and stagger requests to avoid hitting limits mid-run. Also note that results from the Programmable Search Engine may differ slightly from standard Google search results, so treat this data as directional rather than definitive for competitive analysis.

3. Structure API Queries to Track Position Changes Over Time

The Challenge It Solves

A single position data point tells you where you rank today. What actually drives decisions is the trend: are you climbing, holding steady, or dropping? Without a structured approach to storing and comparing position data over time, you're flying blind on ranking trajectory and can't detect drops quickly enough to respond.

The Strategy Explained

Effective position tracking over time requires two things: a consistent data schema and automated scheduling. Your schema should capture the keyword, page URL, position, impressions, clicks, date pulled, and the API source. By storing daily snapshots with these fields, you can calculate position deltas between any two dates, identify trending keywords, and set threshold alerts for significant drops. For a deeper dive into building a complete tracking workflow, explore our guide on how to track keyword rankings.

Schedule your API pulls using a cron job, cloud function, or workflow automation tool. The Search Console API supports pulling data by date range, so each morning you can request the most recent day that has finished processing (typically two to three days back). Build a delta calculation into your pipeline that compares today's position against a rolling 7-day and 30-day average to distinguish noise from genuine ranking shifts.

Implementation Steps

1. Design a database table with columns for: keyword, page_url, position, impressions, clicks, ctr, date, device, country, and api_source.

2. Write a scheduled script that runs daily, pulls the most recent finalized day of Search Console data for all tracked properties, and inserts new rows without overwriting historical records.

3. Create a delta view or query that joins today's position data with data from 7 days ago and 30 days ago for each keyword-page combination.

4. Set alert thresholds: for example, flag any keyword where position has dropped by more than 5 places in 7 days or more than 10 places in 30 days.

5. Send automated alerts via email or Slack when thresholds are breached, including the keyword, current position, previous position, and the page URL.

Pro Tips

Avoid pulling data in real time. Search Console data has a 2-3 day processing lag, so schedule your pulls accordingly. For the Custom Search JSON API, run checks at consistent times of day to minimize variation caused by time-sensitive SERP features. Consistent timing produces cleaner trend lines.

4. Segment Keyword Position Data by Device, Country, and Search Type

The Challenge It Solves

Aggregate position data hides important discrepancies. A keyword averaging position 8 might rank at position 3 on desktop and position 15 on mobile. A page performing well in the US might be invisible in the UK. Without segmentation, you're optimizing for an average that doesn't represent any real user's experience.

The Strategy Explained

The Search Console API's dimension filters make segmentation straightforward. By adding device, country, and searchType as dimensions in your query, you can break down position data across every meaningful axis. This reveals hidden ranking discrepancies that aggregate reporting obscures. Mobile and desktop rankings can differ substantially for the same keyword, particularly for pages with poor Core Web Vitals on mobile or content that isn't optimized for smaller screens.

Country-level segmentation is especially valuable for brands with international presence. A single page might rank well in one market and poorly in another, pointing to localization gaps or hreflang implementation issues. Understanding how to target local SEO keywords can help you address these geographic discrepancies more effectively. Search type segmentation (web, image, video, news) helps you understand where your impressions are actually coming from.

Implementation Steps

1. Modify your API request body to include additional dimensions: set the dimensions array to ["query", "page", "device", "country"] for maximum granularity.

2. Run separate queries for each search type ("web", "image", "video", "news") via the type field in the request body if you want to track position across different result types.

3. Store the device and country fields alongside your existing position data so you can filter and compare segments in your reporting layer.

4. Build comparison queries that show the position gap between mobile and desktop for your top 50 keywords, sorted by the size of the discrepancy.

5. Create country-specific views for your primary markets, filtering by ISO country code (e.g., "usa", "gbr", "deu") to monitor international performance separately.
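Putting the steps together, a segmented request body might be built like this. The field names follow the searchAnalytics.query request format as I understand it (the current API uses type for search type and dimensionFilterGroups for filters); treat this as a sketch to verify against the API reference.

```python
def build_segmented_body(start_date: str, end_date: str,
                         country=None, search_type: str = "web") -> dict:
    """searchAnalytics.query body with device/country dimensions and an
    optional country filter (ISO 3166-1 alpha-3 code, e.g. "usa")."""
    body = {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["query", "page", "device", "country"],
        "type": search_type,  # "web", "image", "video", or "news"
        "rowLimit": 25000,
    }
    if country:
        body["dimensionFilterGroups"] = [{
            "filters": [{
                "dimension": "country",
                "operator": "equals",
                "expression": country,
            }]
        }]
    return body
```

Run one unfiltered pull for full granularity, then filtered pulls per primary market for your country-specific views.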

Pro Tips

When you find significant mobile-desktop position gaps, cross-reference those pages against your Core Web Vitals report in Search Console. Poor mobile performance is often the root cause. For country segmentation, focus your analysis on markets where you have meaningful impression volume rather than trying to optimize for every country simultaneously.

5. Combine API Data with Google Sheets or Looker Studio for Reporting

The Challenge It Solves

Raw API data stored in a database is useful for automation but not for communication. Clients, stakeholders, and team members need visual dashboards that make ranking trends immediately clear. Building a reporting layer on top of your API data transforms position numbers into narratives that drive decisions.

The Strategy Explained

Both Google Sheets and Looker Studio connect natively to Search Console data, and each serves a different use case. Google Sheets is ideal for lightweight tracking, quick analysis, and sharing with clients who want editable reports. Looker Studio (formerly Data Studio) is better for polished, interactive dashboards with automated refresh and multi-source data blending.

For Sheets, the Google Sheets API lets you write position data directly from your tracking script into named ranges or tabs, enabling formula-based trend analysis and conditional formatting to highlight drops. For Looker Studio, you can connect directly to Search Console as a data source or use a BigQuery connector if you're storing data at scale. Pairing this reporting with a solid understanding of organic traffic in Google Analytics gives you a more complete performance picture. Either way, the goal is a live dashboard that updates automatically without manual exports.

Implementation Steps

1. For Google Sheets: use the Sheets API to append rows of position data to a tracking sheet daily, with columns matching your data schema.

2. Add a summary tab with QUERY or FILTER formulas that pull the latest position for each tracked keyword, with conditional formatting to flag drops.

3. For Looker Studio: connect your Search Console property directly as a data source, then build charts showing position over time for your top keywords.

4. Add device and country filter controls to your Looker Studio report so stakeholders can slice data without needing to modify the underlying queries.

5. Schedule automated email delivery of your Looker Studio report to clients or team members on a weekly cadence using the report's built-in scheduling feature.
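For the Sheets side of steps 1 and 2, the only real work is mapping your schema onto the sheet's column order; the append call itself is shown as a comment and assumes an authenticated Sheets API client, a placeholder SPREADSHEET_ID, and a tab named "Tracking".

```python
# Column order must match the header row of the tracking sheet.
RECORD_COLUMNS = ["date", "keyword", "page_url", "position",
                  "impressions", "clicks", "ctr"]

def records_to_rows(records: list[dict]) -> list[list]:
    """Convert position records into row lists in sheet-column order,
    leaving blanks for any missing fields."""
    return [[rec.get(col, "") for col in RECORD_COLUMNS] for rec in records]

# With an authenticated Sheets API client, appending would look like:
#   service.spreadsheets().values().append(
#       spreadsheetId=SPREADSHEET_ID,
#       range="Tracking!A:G",
#       valueInputOption="RAW",
#       insertDataOption="INSERT_ROWS",
#       body={"values": records_to_rows(records)},
#   ).execute()
```

Appending (rather than overwriting a range) preserves history, so your summary tab's QUERY and FILTER formulas always have the full time series to work with.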

Pro Tips

Keep client-facing reports focused on a small set of high-value keywords rather than dumping all data into the dashboard. A report showing 10 strategically important keywords with clear trend lines is more actionable than a table of 5,000 queries. Use sparklines in Sheets for compact trend visualization without taking up too much space.

6. Scale Multi-Site Keyword Monitoring with Batch API Requests

The Challenge It Solves

Agencies managing dozens of client properties face a compounding challenge: running individual API calls for each site quickly hits rate limits, increases latency, and creates maintenance overhead. Without a structured approach to batch processing and centralized storage, multi-site monitoring becomes a fragile collection of disconnected scripts.

The Strategy Explained

The Search Console API is documented at a rate limit of 1,200 queries per minute per project. For agencies with many properties, this means architecting a queue-based system rather than running sequential requests. The approach involves a central job queue that processes properties in parallel while respecting rate limits, writes results to a shared database tagged by property, and handles errors gracefully with retry logic.

Service account delegation is the key to clean multi-property access. Rather than managing separate credentials for each client, you grant a single service account access to all verified properties and use that account for all API calls. Before running position checks, make sure each client's site is properly indexed—our article on how to index a website in Google covers the essentials. Centralized storage with a property identifier field lets you query across all clients or drill into individual accounts from the same database.

Implementation Steps

1. Create a single Google Cloud project with one service account, and add that service account as a user to every Search Console property you manage.

2. Build a job queue (using a database table, Redis, or a task queue service) that stores pending API requests with property ID, date range, and status fields.

3. Write a worker process that pulls jobs from the queue, executes API requests with exponential backoff on rate limit errors (HTTP 429), and marks jobs complete on success.

4. Add a property_id column to your position data table so all results from all clients share a single schema but remain filterable by account.

5. Build a monitoring dashboard that shows the last successful pull time for each property, flagging any that haven't updated in more than 24 hours.
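The retry behavior from step 3 can be sketched as a small worker helper. RateLimitError is a hypothetical exception your executor would raise on HTTP 429; the sleep parameter is injectable so the backoff schedule can be tested without waiting.

```python
import time

class RateLimitError(Exception):
    """Hypothetical: raised by the executor when the API returns HTTP 429."""

def run_with_backoff(job, execute, max_retries: int = 5,
                     base_delay: float = 1.0, sleep=time.sleep):
    """Run one queued job, retrying rate-limited calls with exponential
    backoff (1s, 2s, 4s, ...). Re-raises if max_retries is exhausted."""
    for attempt in range(max_retries):
        try:
            return execute(job)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

In the full system, the worker would pull a pending job from the queue, call `run_with_backoff(job, execute_api_request)`, and mark the job complete on success or failed after the final retry.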

Pro Tips

Stagger your batch jobs across the day rather than running everything at midnight. This distributes API load, reduces the chance of hitting quota limits, and means your data is refreshed progressively rather than all at once. For very large agencies, consider creating separate Google Cloud projects for groups of clients to multiply your effective rate limits.

7. Extend Position Tracking Beyond Google to AI Search Platforms

The Challenge It Solves

In 2026, a growing share of discovery happens outside of Google entirely. Users are asking ChatGPT, Claude, and Perplexity for recommendations, comparisons, and category leaders. If your brand isn't mentioned in those responses, you're invisible to a meaningful and growing segment of your audience regardless of how well you rank on Google. Traditional rank tracking captures only part of the visibility picture.

The Strategy Explained

AI search visibility works differently from SERP ranking. There's no position 1 through 10. Instead, AI models either mention your brand in a response or they don't, and when they do mention it, the context and sentiment matter as much as the mention itself. Monitoring this requires a different methodology: systematically querying AI platforms with prompts relevant to your category and tracking whether and how your brand appears in the responses.

This is where Sight AI fits into a complete visibility stack. Rather than manually querying AI platforms and logging responses, Sight AI tracks brand mentions across ChatGPT, Claude, Perplexity, and other AI models automatically. It provides an AI Visibility Score with sentiment analysis and prompt tracking, so you can see not just whether you're mentioned but how you're described and in what context.

Combining Google API position data with AI visibility monitoring gives you a unified view of how audiences discover your brand across every major channel. When your Google rankings and AI visibility move in different directions, that's a signal worth investigating. If you suspect your content isn't performing well in AI results, our article on AI content not ranking in Google explores the overlap between these challenges.

Implementation Steps

1. Identify the 20-30 prompts most relevant to your category: questions your target audience would ask an AI assistant when looking for solutions you provide.

2. Set up systematic monitoring of those prompts across AI platforms, either manually on a scheduled basis or through a dedicated AI visibility tool.

3. Track three dimensions for each prompt response: whether your brand is mentioned, the sentiment of the mention (positive, neutral, or negative), and the context in which it appears.

4. Compare your AI visibility trends against your Google position trends to identify gaps where you rank well on Google but aren't appearing in AI responses, or vice versa.

5. Use content gaps identified through AI monitoring to inform your SEO content strategy, targeting topics where AI models reference competitors but not your brand. A strong SEO keywords strategy ensures you're targeting the right terms across both traditional and AI search.
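If you run the monitoring manually, the per-response bookkeeping from step 3 could be as simple as the sketch below. The keyword lists are illustrative and the word-set matching is a crude stand-in for a real sentiment model (which is what a dedicated tool would use); the brand name in the usage test, "AcmeRank", is hypothetical.

```python
# Illustrative sentiment cue lists; a real pipeline would use an NLP model.
POSITIVE = {"best", "recommended", "leading", "reliable", "top"}
NEGATIVE = {"limited", "expensive", "outdated", "lacking"}

def score_response(response_text: str, brand: str) -> dict:
    """Record whether the brand is mentioned in an AI response and a
    rough sentiment label based on simple keyword cues."""
    text = response_text.lower()
    mentioned = brand.lower() in text
    words = set(text.replace(",", " ").replace(".", " ").split())
    if not mentioned:
        sentiment = None
    else:
        pos, neg = bool(words & POSITIVE), bool(words & NEGATIVE)
        sentiment = ("positive" if pos and not neg
                     else "negative" if neg and not pos
                     else "neutral")
    return {"mentioned": mentioned, "sentiment": sentiment}
```

Log the prompt, platform, date, and this score for each check, and the resulting table lines up directly with your Google position data for the gap analysis in step 4.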

Pro Tips

AI visibility and Google rankings are increasingly influenced by the same underlying signals: authoritative content, strong backlink profiles, and consistent brand mentions across the web. Improving your content depth and topical authority tends to lift both simultaneously. Treat AI visibility monitoring as a leading indicator: if AI models start mentioning you more frequently in your category, organic search visibility often follows.

Putting It All Together: Your Keyword Position Tracking Roadmap

The seven strategies in this guide build on each other progressively, from foundational API setup to sophisticated multi-site monitoring and AI visibility tracking. If you're starting from scratch, here's the implementation order that makes the most sense.

Step 1: Search Console API access. Get authenticated, pull your first dataset, and verify you're receiving accurate position data for your verified properties. This is your data foundation.

Step 2: Automate daily pulls and storage. Build the scheduling and database infrastructure that captures daily snapshots and calculates position deltas. Without this, everything else is manual.

Step 3: Build segmented reporting. Add device, country, and search type dimensions to your queries, then connect the output to Sheets or Looker Studio for stakeholder-ready dashboards.

Step 4: Scale across properties. If you manage multiple sites, implement the batch processing architecture with centralized storage and rate limit management before your client roster grows further.

Step 5: Extend to AI search visibility. Once your Google tracking infrastructure is solid, layer in AI platform monitoring to capture the full spectrum of how your audience discovers your brand.

The most effective SEO teams in 2026 don't treat Google rankings and AI visibility as separate disciplines. They monitor both, look for correlations and divergences, and use insights from each to inform the other. A brand that ranks well on Google but doesn't appear in AI responses has a content gap worth addressing. A brand mentioned frequently by AI models but not ranking on Google has an SEO opportunity hiding in plain sight.

Stop guessing how AI models like ChatGPT and Claude talk about your brand. Start tracking your AI visibility today and see exactly where your brand appears across top AI platforms, so you can combine that intelligence with your Google API position data for a complete picture of your organic presence.
