How to Scrape Google Maps with Python and ProxyTee


Google Maps is an invaluable resource for businesses, researchers, and developers looking to extract data such as business names, addresses, contact details, customer reviews, and more. However, manually collecting this information is time-consuming and inefficient. This is where web scraping comes in—an automated approach that enables users to retrieve large amounts of data quickly and efficiently.

That said, scraping Google Maps presents its own challenges, including dynamic content loading, bot detection, and IP restrictions. By leveraging ProxyTee and Python-based tools like Selenium, you can overcome these obstacles and extract data at scale without getting blocked.

In this guide, we'll walk you through the process of building a Google Maps scraper using Python, Selenium, and ProxyTee's high-quality proxy services to ensure stable and anonymous scraping.


What Is a Google Maps Scraper?

A Google Maps scraper is a tool that automates data extraction from Google Maps. Rather than manually searching and copying information, a scraper programmatically navigates through Google Maps, retrieves relevant data, and stores it for analysis.

This is especially useful for:

  • Market research – Identifying competitors and analyzing industry trends.
  • Lead generation – Gathering business contact details for sales and marketing.
  • Local SEO optimization – Monitoring business listings and customer reviews.
  • Data analysis – Collecting structured business data for research.

What Data Can You Retrieve From Google Maps?

A Google Maps scraper can extract the following details:

  • Business Name: The name of the business or point of interest.
  • Address: The physical location of the business.
  • Phone Number: The business contact number, if available.
  • Website: The URL of the business's official website.
  • Business Hours: Open and closing times.
  • Reviews: Customer feedback, including star ratings and textual reviews.
  • Ratings: Overall average star rating.
  • Photos: Images uploaded by businesses or customers.

To collect this data efficiently while avoiding detection, you need residential proxies and automated scraping tools like Selenium.
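
To make that concrete, here is a hypothetical sketch of a single record as produced by the scraper built later in this guide; the business name, values, and URLs are made up for illustration:

# a hypothetical example of one scraped Google Maps record
item = {
    "url": "https://www.google.com/maps/place/...",    # link to the listing
    "image": "https://lh5.googleusercontent.com/...",  # thumbnail photo
    "title": "Trattoria Example",                      # business name
    "reviews": {"stars": 4.5, "count": 1280},          # rating and review count
    "price": "$$",                                     # price level, if shown
    "info": ["123 Example St", "Open 24 hours"],       # address, hours, etc.
    "tags": ["Italian", "Pasta"],                      # category tags
}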


Step-by-Step Instructions to Scrape Google Maps With Python

Here’s how to build a Google Maps scraper in Python. We’ll be using Selenium for web page interaction and ProxyTee’s Unlimited Residential Proxies for enhanced anonymity and stability.

1️⃣ Set Up Your Project

Before coding, confirm that you have Python 3+ installed on your computer. Then create a directory for this project. Within this directory, create a virtual environment to keep this project's packages isolated from those of other projects.

mkdir google-maps-scraper
cd google-maps-scraper
python -m venv env

Once your environment is ready, set up your preferred IDE for this project, such as Visual Studio Code with the Python extension.

Inside the project folder, create a file named `scraper.py`. Activate your virtual environment, and then let's continue.
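
The activation command depends on your operating system; for reference:

# macOS / Linux
source env/bin/activate

# Windows (PowerShell)
env\Scripts\Activate.ps1

# Windows (cmd.exe)
env\Scripts\activate.bat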

2️⃣ Choose the Scraping Library

Given the dynamic and interactive nature of Google Maps, browser automation tools like Selenium are your best bet. Selenium lets you interact with the web page programmatically, which is far more practical than working with static page requests for a site like this.

pip install selenium
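
A quick note on drivers: Selenium 4.6 and later ships with Selenium Manager, which downloads a matching ChromeDriver automatically, so you usually do not need to install a driver yourself. You can confirm which version you have installed:

import selenium

print(selenium.__version__)  # 4.6 or later bundles Selenium Manager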

3️⃣ Configure the Scraping Library and ProxyTee Proxies

To scrape Google Maps without getting blocked, configure ProxyTee proxies. These proxies allow your requests to come from different IP addresses, making scraping more stable.

Open the `scraper.py` file you created earlier and add the following code:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options

# configure ProxyTee credentials
PROXY_HOST = "YOUR_MYPROXY_PROXY_HOST"
PROXY_PORT = YOUR_MYPROXY_PROXY_PORT
PROXY_USER = "YOUR_MYPROXY_USER"
PROXY_PASS = "YOUR_MYPROXY_PASS"

# to launch Chrome in headless mode
options = Options()
options.add_argument("--headless")  # comment it while developing

# Configure proxy in Selenium options
options.add_argument(f'--proxy-server=http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}')


# create a Chrome web driver instance with the specified options
driver = webdriver.Chrome(
    service=Service(),
    options=options
)

# connect to the Google Maps home page
driver.get("https://www.google.com/maps")

# your scraping logic


# close the web browser
driver.quit()

Remember to substitute in your own ProxyTee proxy settings so that each request is routed through a residential IP, which keeps you from getting banned or rate-limited. ProxyTee's Unlimited Residential Proxies mean you will not experience bandwidth throttling or additional charges, letting you focus solely on data extraction without worrying about exceeding usage limits.
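
One caveat worth knowing: Chrome generally does not accept a username and password embedded in the --proxy-server argument, so authenticated proxies can fail in headless runs. If that happens in your setup, a common workaround is the third-party selenium-wire package (or IP allow-listing, if your plan supports it). A minimal sketch, assuming the same PROXY_* placeholders defined above:

# pip install selenium-wire
from seleniumwire import webdriver  # drop-in replacement for selenium's webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")

# route all traffic through the authenticated ProxyTee endpoint
proxy_url = f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}"
seleniumwire_options = {
    "proxy": {
        "http": proxy_url,
        "https": proxy_url,
        "no_proxy": "localhost,127.0.0.1",
    }
}

driver = webdriver.Chrome(
    options=options,
    seleniumwire_options=seleniumwire_options,
)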

4️⃣ Connect to the Target Page

Now use the Selenium driver's `get()` method to navigate to the Google Maps website. At this point, `scraper.py` should contain the following:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options

# configure ProxyTee credentials
PROXY_HOST = "YOUR_MYPROXY_PROXY_HOST"
PROXY_PORT = YOUR_MYPROXY_PROXY_PORT
PROXY_USER = "YOUR_MYPROXY_USER"
PROXY_PASS = "YOUR_MYPROXY_PASS"

# to launch Chrome in headless mode
options = Options()
options.add_argument("--headless")  # comment it while developing

# Configure proxy in Selenium options
options.add_argument(f'--proxy-server=http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}')

# create a Chrome web driver instance with the specified options
driver = webdriver.Chrome(
    service=Service(),
    options=options
)

# connect to the Google Maps home page
driver.get("https://www.google.com/maps")


# scraping logic...

# close the web browser
driver.quit()

5️⃣ Handle the GDPR Cookie Dialog

If you are located in the EU, Google redirects you to a cookie consent page before showing Maps and keeps you there until you make a choice. Before proceeding, the script therefore needs to click the 'Accept all' button, which you can locate by inspecting its HTML attributes:

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

try:
    # select the "Accept all" button from the GDPR cookie option page
    accept_button = driver.find_element(By.CSS_SELECTOR, "[aria-label=\"Accept all\"]")
    # click it
    accept_button.click()
except NoSuchElementException:
    print("No GDPR requirements")

The script will now either click the 'Accept all' button and continue to the requested page, or skip this step and proceed if the consent dialog does not appear.
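
If the consent page is slow to render, the immediate find_element call above can run before the button exists. A slightly more defensive variant, assuming the same selector, waits for the button to become clickable first:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

try:
    # wait up to 5 seconds for the consent button before giving up
    accept_button = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "[aria-label=\"Accept all\"]"))
    )
    accept_button.click()
except TimeoutException:
    print("No GDPR requirements")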

6️⃣ Submit the Search Form

Now fill in the search input. First locate the field with `WebDriverWait`, which also ensures the element has been rendered before we interact with it, avoiding errors:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

search_input = WebDriverWait(driver, 5).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "#searchboxinput"))
)
search_query = "italian restaurants"
search_input.send_keys(search_query)

# submit the search form
search_button = driver.find_element(By.CSS_SELECTOR, "button[aria-label=\"Search\"]")
search_button.click()

In this case, “italian restaurants” is the search query, but you can search for any other term. After entering the text, the script clicks the 'Search' button to load the results page.
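
As a fallback, if the Search button selector ever changes, you can submit the query by sending an Enter keypress to the input instead:

from selenium.webdriver.common.keys import Keys

# press Enter in the search box instead of clicking the Search button
search_input.send_keys(Keys.ENTER)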

7️⃣ Select the Google Maps Items

With the search results in front of us, we can start scraping. Since the results form a collection, a good approach is to store them in a list:

items = []

maps_items = WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.XPATH, '//div[@role="feed"]//div[contains(@jsaction, "mouseover:pane")]'))
)

Here we select every result container inside the results feed. Each container corresponds to one business listing, so `maps_items` gives us multiple items to iterate over in the next step.
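
Keep in mind that Google Maps lazy-loads the results list, so only the first batch of listings is in the DOM at this point. If you want more, one optional approach (not part of the original flow) is to scroll the feed container a few times before collecting the items:

import time

# scroll the results feed a few times so that more listings are loaded
feed = driver.find_element(By.CSS_SELECTOR, 'div[role="feed"]')
for _ in range(3):
    driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", feed)
    time.sleep(2)  # give the new results time to render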

8️⃣ Scrape the Google Maps Items

Here we loop over each map element and pick out the individual details from every result container:

import re

for maps_item in maps_items:
    link_element = maps_item.find_element(By.CSS_SELECTOR, "a[jsaction][jslog]")
    url = link_element.get_attribute("href")

    title_element = maps_item.find_element(By.CSS_SELECTOR, "div.fontHeadlineSmall")
    title = title_element.text

    reviews_element = maps_item.find_element(By.CSS_SELECTOR, "span[role=\"img\"]")
    reviews_string = reviews_element.get_attribute("aria-label")
    # define a regular expression pattern to extract the stars and reviews count
    reviews_string_pattern = r"(\d+(?:\.\d+)?) stars ([\d,]+) Reviews"
    # use re.match to find the matching groups
    reviews_string_match = re.match(reviews_string_pattern, reviews_string)
    reviews_stars = None
    reviews_count = None
    # if a match is found, extract the data
    if reviews_string_match:
        # convert stars to float
        reviews_stars = float(reviews_string_match.group(1))
        # convert reviews count to integer
        reviews_count = int(reviews_string_match.group(2).replace(",", ""))

    info_div = maps_item.find_element(By.CSS_SELECTOR, ".fontBodyMedium")

    # scrape the price, if present
    try:
        price_element = info_div.find_element(By.XPATH, ".//*[@aria-label[contains(., 'Price')]]")
        price = price_element.text
    except NoSuchElementException:
        price = None

    info = []
    # select all <span> elements with no attributes (or only the @style attribute)
    # and with no nested <span> descendants
    span_elements = info_div.find_elements(By.XPATH, ".//span[not(@*) or @style][not(descendant::span)]")
    for span_element in span_elements:
        info.append(span_element.text.replace("⋅", "").strip())
    # remove any duplicate info and empty strings
    info = list(filter(None, list(set(info))))

    # wait for the image inside the current item (scoped to maps_item so each
    # result gets its own image rather than the first image on the page)
    img_element = WebDriverWait(driver, 5).until(
        lambda d: maps_item.find_element(By.CSS_SELECTOR, "img[decoding=\"async\"][aria-hidden=\"true\"]")
    )
    image = img_element.get_attribute("src")

    # select the tag <div> element and extract data from it
    tags_div = maps_item.find_elements(By.CSS_SELECTOR, ".fontBodyMedium")[-1]
    tags = []
    tag_elements = tags_div.find_elements(By.CSS_SELECTOR, "span[style]")
    for tag_element in tag_elements:
        tags.append(tag_element.text)

    # populate a new item with the scraped data
    item = {
      "url": url,
      "image": image,
      "title": title,
      "reviews": {
        "stars": reviews_stars,
        "count": reviews_count
      },
      "price": price,
      "info": info,
      "tags": tags
    }
    # add it to the list of scraped data
    items.append(item)

As you can see, most of the information can be retrieved simply by traversing the HTML elements via their CSS selectors and XPath expressions.

9️⃣ Collect the Scraped Data

In the loop above, each map result is extracted into its own item object and appended to the `items` collection declared earlier.
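
Before exporting, you can optionally print the collected items to verify their structure:

import json

# quick sanity check of the scraped structure
print(json.dumps(items, indent=2, ensure_ascii=False))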

🔟 Export to CSV

With all the information extracted, the next step is to export it to a readable CSV format. This flattens the nested fields and writes one record per scraped item:

import csv
# output CSV file path
output_file = "items.csv"
# flatten and export to CSV
with open(output_file, mode="w", newline="", encoding="utf-8") as csv_file:
    # define the CSV field names
    fieldnames = ["url", "image", "title", "reviews_stars", "reviews_count", "price", "info", "tags"]
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    # write the header
    writer.writeheader()
    # write each item, flattening info and tags
    for item in items:
        writer.writerow({
            "url": item["url"],
            "image": item["image"],
            "title": item["title"],
            "reviews_stars": item["reviews"]["stars"],
            "reviews_count": item["reviews"]["count"],
            "price": item["price"],
            "info": "; ".join(item["info"]),
            "tags": "; ".join(item["tags"])
        })

The flattened CSV makes data processing and analysis easier, since each nested collection in the item objects is turned into a plain CSV field.
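
As an alternative, if you already work with pandas, `json_normalize` can do much of the flattening for you. A minimal sketch; note that the list fields still need to be joined manually to match the output above:

import pandas as pd

# flatten nested dicts (e.g. reviews.stars, reviews.count) into columns
df = pd.json_normalize(items)
# join the list fields into single strings, mirroring the csv module version
df["info"] = df["info"].apply("; ".join)
df["tags"] = df["tags"].apply("; ".join)
df.to_csv("items.csv", index=False)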

Put It All Together

And here is the complete code for your scraper:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import re
import csv

# configure ProxyTee credentials
PROXY_HOST = "YOUR_MYPROXY_PROXY_HOST"
PROXY_PORT = YOUR_MYPROXY_PROXY_PORT
PROXY_USER = "YOUR_MYPROXY_USER"
PROXY_PASS = "YOUR_MYPROXY_PASS"


# to launch Chrome in headless mode
options = Options()
options.add_argument("--headless")  # comment it while developing

# Configure proxy in Selenium options
options.add_argument(f'--proxy-server=http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}')

# create a Chrome web driver instance with the
# specified options
driver = webdriver.Chrome(
    service=Service(),
    options=options
)

# connect to the Google Maps home page
driver.get("https://www.google.com/maps")

# handle the GDPR cookie dialog, if shown
try:
    # select the "Accept all" button from the GDPR cookie option page
    accept_button = driver.find_element(By.CSS_SELECTOR, "[aria-label=\"Accept all\"]")
    # click it
    accept_button.click()
except NoSuchElementException:
    print("No GDPR requirements")

# select the search input and fill it in
search_input = WebDriverWait(driver, 5).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "#searchboxinput"))
)
search_query = "italian restaurants"
search_input.send_keys(search_query)

# submit the search form
search_button = driver.find_element(By.CSS_SELECTOR, "button[aria-label=\"Search\"]")
search_button.click()

# where to store the scraped data
items = []

# select the Google Maps items
maps_items = WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.XPATH, '//div[@role="feed"]//div[contains(@jsaction, "mouseover:pane")]'))
)

# iterate over the Google Maps items and
# perform the scraping logic
for maps_item in maps_items:
    link_element = maps_item.find_element(By.CSS_SELECTOR, "a[jsaction][jslog]")
    url = link_element.get_attribute("href")

    title_element = maps_item.find_element(By.CSS_SELECTOR, "div.fontHeadlineSmall")
    title = title_element.text

    reviews_element = maps_item.find_element(By.CSS_SELECTOR, "span[role=\"img\"]")
    reviews_string = reviews_element.get_attribute("aria-label")
    # define a regular expression pattern to extract the stars and reviews count
    reviews_string_pattern = r"(\d+(?:\.\d+)?) stars ([\d,]+) Reviews"
    # use re.match to find the matching groups
    reviews_string_match = re.match(reviews_string_pattern, reviews_string)
    reviews_stars = None
    reviews_count = None
    # if a match is found, extract the data
    if reviews_string_match:
        # convert stars to float
        reviews_stars = float(reviews_string_match.group(1))
        # convert reviews count to integer
        reviews_count = int(reviews_string_match.group(2).replace(",", ""))

    # select the Google Maps item <div> with most info
    # and extract data from it
    info_div = maps_item.find_element(By.CSS_SELECTOR, ".fontBodyMedium")
    # scrape the price, if present
    try:
        price_element = info_div.find_element(By.XPATH, ".//*[@aria-label[contains(., 'Price')]]")
        price = price_element.text
    except NoSuchElementException:
        price = None

    info = []
    # select all <span> elements with no attributes (or only the @style attribute)
    # and with no nested <span> descendants
    span_elements = info_div.find_elements(By.XPATH, ".//span[not(@*) or @style][not(descendant::span)]")
    for span_element in span_elements:
        info.append(span_element.text.replace("⋅", "").strip())
    # remove any duplicate info and empty strings
    info = list(filter(None, list(set(info))))

    # wait for the image inside the current item (scoped to maps_item so each
    # result gets its own image rather than the first image on the page)
    img_element = WebDriverWait(driver, 5).until(
        lambda d: maps_item.find_element(By.CSS_SELECTOR, "img[decoding=\"async\"][aria-hidden=\"true\"]")
    )
    image = img_element.get_attribute("src")

    # select the tag <div> element and extract data from it
    tags_div = maps_item.find_elements(By.CSS_SELECTOR, ".fontBodyMedium")[-1]
    tags = []
    tag_elements = tags_div.find_elements(By.CSS_SELECTOR, "span[style]")
    for tag_element in tag_elements:
        tags.append(tag_element.text)
    # populate a new item with the scraped data
    item = {
      "url": url,
      "image": image,
      "title": title,
      "reviews": {
        "stars": reviews_stars,
        "count": reviews_count
      },
      "price": price,
      "info": info,
      "tags": tags
    }
    # add it to the list of scraped data
    items.append(item)

# output CSV file path
output_file = "items.csv"
# flatten and export to CSV
with open(output_file, mode="w", newline="", encoding="utf-8") as csv_file:
    # define the CSV field names
    fieldnames = ["url", "image", "title", "reviews_stars", "reviews_count", "price", "info", "tags"]
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    # write the header
    writer.writeheader()
    # write each item, flattening info and tags
    for item in items:
        writer.writerow({
            "url": item["url"],
            "image": item["image"],
            "title": item["title"],
            "reviews_stars": item["reviews"]["stars"],
            "reviews_count": item["reviews"]["count"],
            "price": item["price"],
            "info": "; ".join(item["info"]),
            "tags": "; ".join(item["tags"])
        })

# close the web browser
driver.quit()
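
To launch the scraper, run the script from your activated virtual environment:

python scraper.py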

Once the run finishes, you will find an `items.csv` file containing the data extracted from Google Maps. You now have a working Google Maps scraper!


Why Use ProxyTee for Google Maps Scraping?

Scraping Google Maps requires stable and anonymous connections to avoid bans. ProxyTee offers the best solution with:

  • Unlimited Residential Proxies – no bandwidth throttling or additional charges, so large scraping jobs never hit a usage ceiling.
  • Real residential IPs – requests come from different residential IP addresses, keeping your scraper from being flagged or rate-limited.
  • Stability and anonymity – reliable connections suited to long-running, large-scale extraction.


Conclusion

Scraping Google Maps is a powerful way to extract valuable business data at scale. With Selenium for automation and ProxyTee for secure, anonymous connections, you can scrape data efficiently while avoiding bans.

By following this guide, you can now build a robust Google Maps scraper that collects structured business information for analysis, marketing, and research.

🚀 Ready to scale your web scraping operations? Get started with ProxyTee today!