Web Scraping: Learn How to Use Tools and Techniques


Welcome, fellow developers and data enthusiasts! Today, we’re going to embark on an exciting journey into the world of web scraping. So, let’s dive in!

What is Web Scraping?

Web scraping, in its simplest form, is the process of extracting data from websites. It’s like a virtual miner, digging through layers of content to unearth the valuable nuggets of information hidden within. This can range from grabbing product prices from an e-commerce site, to extracting social media posts for sentiment analysis, or even pulling metadata from a webpage for SEO purposes.

Web scraping is a powerful tool in our digital age. It’s used in a wide array of applications, from data analysis and machine learning, to content aggregation and competitive analysis. It’s a skill that, once mastered, can open up a vast universe of data that was previously inaccessible.

Understanding HTML and CSS

Welcome back, fellow data miners! Now that we’ve covered the what and why of web scraping, let’s move on to the how. To effectively scrape data from a website, we need to understand the building blocks of that website: HTML and CSS. Don’t worry if you’re not a web developer – we’ll break it down in simple, digestible terms. Let’s get started!

Basics of HTML

HTML, or HyperText Markup Language, is the backbone of any website. It’s what structures the content on the page and tells your browser what to display. Think of it as the skeleton of a website.

HTML is composed of elements called tags. Each tag represents a different type of content. For example, <p> is a paragraph tag, <h1> is a heading tag, and <a> is a link tag. Here’s a simple HTML document:

<!DOCTYPE html>
<html>
    <head>
        <title>My First Web Page</title>
    </head>
    <body>
        <h1>Welcome to My Web Page!</h1>
        <p>This is a paragraph of text.</p>
        <a href="https://www.example.com">Click me!</a>
    </body>
</html>

In this example, <html> is the root element that contains all other elements. The <head> element contains meta-information about the document, like its title. The <body> element contains the main content of the web page.

Basics of CSS

While HTML provides the structure, CSS (Cascading Style Sheets) provides the style. It’s what makes a website look good. Think of it as the skin and clothes that cover the skeleton.

CSS rules are made up of selectors and declarations. The selector determines which HTML elements the rule applies to, and the declarations specify what style should be applied. Here’s a simple CSS rule:

p {
    color: blue;
    font-size: 16px;
}

This rule applies to all <p> (paragraph) elements. It sets the text color to blue and the font size to 16 pixels.
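
CSS selectors matter for scraping too: libraries like BeautifulSoup let you reuse the same selector syntax to find elements. Here is a minimal sketch using a small, made-up HTML snippet:

```python
from bs4 import BeautifulSoup

# A tiny, hypothetical document: two paragraphs, one with the class "intro"
html = '<html><body><p class="intro">Hello</p><p>World</p></body></html>'
soup = BeautifulSoup(html, 'html.parser')

# The same selector syntax as CSS: all <p> elements with class "intro"
for p in soup.select('p.intro'):
    print(p.text)  # prints "Hello"
```

The selector `p.intro` here is exactly what you would write in a stylesheet, which is why a little CSS fluency pays off when scraping.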

Introduction to the Document Object Model (DOM)

Now that we’ve covered HTML and CSS, let’s talk about how they interact with each other and with your browser. This is where the Document Object Model (DOM) comes in.

The DOM is a programming interface for HTML and XML documents. It represents the structure of a document as a tree, where each node is an object representing a part of the document. This tree-like structure is called the “DOM tree”, and the objects are “nodes”.

Here’s a simple representation of a DOM tree for our earlier HTML example:

└── html
    ├── head
    │   └── title
    └── body
        ├── h1
        ├── p
        └── a

When you’re scraping a website, you’re essentially navigating this DOM tree and extracting the data you need. Understanding the DOM is crucial to effective web scraping.
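
To make the tree concrete, here is a small sketch using Python's built-in `html.parser` to record each tag and its depth, reproducing the DOM tree of the earlier HTML example (no external libraries needed):

```python
from html.parser import HTMLParser

# Walk the tags of an HTML document, recording (depth, tag) pairs
# in document order -- a crude picture of the DOM tree.
class TreeWalker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.nodes = []  # (depth, tag) pairs

    def handle_starttag(self, tag, attrs):
        self.nodes.append((self.depth, tag))
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

html = ('<html><head><title>My First Web Page</title></head>'
        '<body><h1>Welcome!</h1><p>Text.</p>'
        '<a href="https://www.example.com">Click me!</a></body></html>')

walker = TreeWalker()
walker.feed(html)
for depth, tag in walker.nodes:
    print('  ' * depth + tag)
```

Running this prints an indented tree with `html` at the root and `title`, `h1`, `p`, and `a` as leaves, matching the diagram above.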

In the next section, we’ll start looking at some tools we can use to scrape data from websites. But for now, take some time to familiarize yourself with HTML, CSS, and the DOM. They’re the foundation upon which all web scraping is built.

Introduction to Web Scraping Tools

Hello again, data enthusiasts! Now that we’ve got a solid understanding of HTML, CSS, and the DOM, it’s time to introduce the real stars of the show: web scraping tools. These powerful libraries and frameworks are what will allow us to extract data from websites with ease. Let’s dive in!

Overview of Web Scraping Libraries and Frameworks

There’s a wide array of web scraping tools out there, each with its own strengths and weaknesses. Here are a few of the most popular ones:


BeautifulSoup is a Python library that’s great for beginners. It’s designed for pulling data out of HTML and XML files, making it perfect for web scraping. It provides Pythonic idioms for iterating, searching, and modifying the parse tree.

Here’s a simple example of how to use BeautifulSoup to extract all the links from a webpage:

from bs4 import BeautifulSoup
import requests

response = requests.get('https://www.example.com')
soup = BeautifulSoup(response.text, 'html.parser')

for link in soup.find_all('a'):
    print(link.get('href'))

To install BeautifulSoup and requests, run the following command in your terminal:

pip install beautifulsoup4 requests


Scrapy is another Python library, but it’s much more powerful and flexible than BeautifulSoup. It’s an open-source web crawling framework that allows you to write spiders to crawl websites and extract structured data from them. It’s a bit more complex to use than BeautifulSoup, but it’s also much more powerful.

Here’s a basic Scrapy spider that does the same thing as the BeautifulSoup example above:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['https://www.example.com']

    def parse(self, response):
        for link in response.css('a::attr(href)').getall():
            yield {'link': link}


Puppeteer is a Node.js library that provides a high-level API to control Chrome or Chromium over the DevTools Protocol. It’s perfect for scraping dynamic websites that rely heavily on JavaScript. Puppeteer can generate screenshots and PDFs of pages, crawl a Single-Page Application (SPA), and more.

Here’s a simple Puppeteer script to extract all the links from a webpage:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://www.example.com');

    const links = await page.$$eval('a', as => as.map(a => a.href));
    console.log(links);

    await browser.close();
})();

If you plan to use Scrapy or Puppeteer, you can install them with the following terminal commands:

pip install Scrapy

For Puppeteer, you’ll first need to install Node.js and npm, which you can download from the official Node.js website. Once you’ve installed Node.js and npm, you can install Puppeteer with the following command:

npm install puppeteer

Choosing the Right Tool for the Job

When it comes to choosing a web scraping tool, there’s no one-size-fits-all answer. The best tool for the job depends on the job itself.

If you’re just getting started with web scraping, or if you’re working with a simple, static website, BeautifulSoup is a great choice. It’s easy to use and has plenty of functionality for basic web scraping tasks.

If you’re dealing with a more complex website, or if you need to scrape a large amount of data quickly, Scrapy might be the better choice. It’s more powerful than BeautifulSoup and has built-in functionality for handling things like concurrency, throttling, and retrying failed requests.
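
Those concurrency and throttling knobs live in a Scrapy project's settings.py. A minimal sketch of the relevant settings (the values are illustrative, not recommendations):

```python
# settings.py -- throttling and retry options in a Scrapy project
CONCURRENT_REQUESTS = 8      # how many requests Scrapy runs in parallel
DOWNLOAD_DELAY = 1.0         # seconds to wait between requests to the same site
AUTOTHROTTLE_ENABLED = True  # adapt the delay to the server's response times
RETRY_ENABLED = True         # retry failed requests...
RETRY_TIMES = 2              # ...up to this many extra attempts
```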

Finally, if you’re working with a dynamic website that relies heavily on JavaScript, Puppeteer is the way to go. It’s the only tool of the three that can handle JavaScript out of the box, and it provides a lot of powerful features for interacting with websites in a browser-like environment.

Your First Web Scraping Project: Simple Static Website

Welcome back, data enthusiasts! Now that we’ve set up our environment, it’s time to start our first web scraping project. We’ll be scraping a simple static website for this tutorial. Let’s get started!

Choosing a Website to Scrape

The first step in any web scraping project is choosing a website to scrape. For this tutorial, let’s use a simple one: http://books.toscrape.com/. It’s a safe playground, designed specifically for practicing web scraping.

Inspecting the Website’s Structure

Before we start writing code, we need to understand the structure of the website. Open the website in your browser and inspect the elements. You can do this by right-clicking on the webpage and selecting “Inspect” or “Inspect Element” from the context menu.

You’ll see that each book is contained in an <article> tag with the class product_pod. The book title is in an <h3> tag, and the price is in a <p> tag with the class price_color.

Writing Code to Extract Data

Now that we understand the structure of the website, we can start writing our code. We’ll use Python with the BeautifulSoup and requests libraries for this tutorial.

Here’s a simple script that extracts the titles and prices of all the books on the first page:

import requests
from bs4 import BeautifulSoup

response = requests.get('http://books.toscrape.com/')
soup = BeautifulSoup(response.text, 'html.parser')

for book in soup.find_all('article', class_='product_pod'):
    title = book.h3.a['title']
    price = book.find('p', class_='price_color').text
    print(f'{title}: {price}')

This script sends a GET request to the website, parses the HTML response, and then loops through all the article tags with the class product_pod. For each book, it extracts the title and price and prints them.

Saving Data to a File

Printing the data to the console is fine for a quick check, but usually, you’ll want to save the data to a file for further analysis. Here’s how to modify the script to save the data to a CSV file:

import csv
import requests
from bs4 import BeautifulSoup

response = requests.get('http://books.toscrape.com/')
soup = BeautifulSoup(response.text, 'html.parser')

with open('books.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Title', 'Price'])

    for book in soup.find_all('article', class_='product_pod'):
        title = book.h3.a['title']
        price = book.find('p', class_='price_color').text
        writer.writerow([title, price])

This script does the same thing as the previous one, but instead of printing the data, it writes it to a CSV file named books.csv.

Scraping Dynamic Websites

Hello again, data miners! We’ve successfully scraped a static website, but what happens when a website relies on JavaScript to load its content? That’s where things get a little more interesting. Let’s dive into the world of dynamic websites!

Introduction to JavaScript and Dynamic Content

Many modern websites use JavaScript to load or display content dynamically. This means the content of the webpage can change after the initial page load, without the need for reloading the entire page. This can pose a challenge for web scraping, as the content you’re after might not be in the HTML when you first load the page.

For example, a website might use JavaScript to load more items when you scroll down (infinite scroll), or to load content in response to user interactions, like clicking a button. If you try to scrape these websites using the techniques we’ve used so far, you might find that the data you’re after is missing.

Using Tools Like Selenium or Puppeteer to Scrape Dynamic Websites

To scrape dynamic websites, we need tools that can interact with JavaScript. Two popular choices are Selenium and Puppeteer.


Selenium is a powerful tool for controlling a web browser programmatically. It’s primarily used for testing web applications, but it’s also a great tool for scraping dynamic websites. Selenium supports multiple programming languages, including Python, Java, C#, and Ruby.

Here’s an example of how to use Selenium with Python to scrape a dynamic website:

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()  # Or webdriver.Chrome(), depending on your browser
driver.get('https://www.example.com')

# Interact with the page, e.g., scroll down, click a button, etc.
# ...

# Now you can parse the HTML as usual
soup = BeautifulSoup(driver.page_source, 'html.parser')

# Don't forget to close the driver when you're done
driver.quit()

Puppeteer is a Node.js library that provides a high-level API to control Chrome or Chromium over the DevTools Protocol. It can generate screenshots and PDFs of pages, crawl a Single-Page Application (SPA), and more.

Here’s a simple Puppeteer script to extract data from a dynamic website:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://www.example.com');

    // Interact with the page, e.g., scroll down, click a button, etc.
    // ...

    // Now you can extract data in the page context
    const data = await page.evaluate(() => {
        // This function runs in the page, so you can use the usual browser APIs
        return Array.from(document.querySelectorAll('a'), a => a.href);
    });
    console.log(data);

    await browser.close();
})();

Scraping dynamic websites can be a bit more complex than static ones, but with the right tools, it’s definitely achievable.

Data Cleaning and Processing

Hello again, data enthusiasts! We’ve successfully scraped both static and dynamic websites, but our work isn’t done yet. Raw data from the web is often messy and unstructured, and it needs to be cleaned and processed before it can be used effectively. Let’s dive into the world of data cleaning and processing!

Introduction to Data Cleaning

Data cleaning, also known as data cleansing or data scrubbing, is the process of detecting and correcting (or removing) corrupt, inaccurate, or irrelevant parts of the data. It’s a crucial step in the data analysis process, as the quality of your data can greatly affect the outcome of your analysis.

Data cleaning can involve a wide range of tasks, depending on the nature of the data and the specific requirements of your project. Here are a few common data cleaning tasks:

  • Removing duplicates: Duplicate data can skew your analysis and lead to inaccurate results.
  • Handling missing data: Not all data is complete, and you’ll need to decide how to handle missing values. You might choose to ignore them, fill them in with a default value, or use a technique like interpolation or imputation to estimate the missing values.
  • Converting data types: Data from the web often comes as strings, but you might need to convert it to other data types (like integers, floats, or dates) for your analysis.
  • Parsing text data: You might need to extract useful information from text data. This could involve tasks like splitting strings, removing punctuation or whitespace, or extracting dates or numbers.

Using Libraries Like Pandas for Data Processing

Python has a number of powerful libraries for data cleaning and processing. One of the most popular is pandas, which provides data structures and functions needed to manipulate structured data.

Here’s an example of how to use pandas to clean and process the data we scraped earlier:

import pandas as pd

# Load the data into a pandas DataFrame
df = pd.read_csv('books.csv')

# Remove duplicates
df = df.drop_duplicates()

# Handle missing data
df = df.dropna()  # or df.fillna(value), depending on your needs

# Convert price to a numeric type
df['Price'] = df['Price'].str.replace('£', '').astype(float)

# Save the cleaned data to a new CSV file
df.to_csv('books_clean.csv', index=False)

In this example, we load the data into a pandas DataFrame, remove duplicates, drop rows with missing values, convert the price to a numeric type, and save the cleaned data to a new CSV file.

Data cleaning and processing is a broad and complex topic, and there’s a lot more to it than we can cover in this section. However, with the basics under your belt and a tool like pandas at your disposal, you’re well-equipped to tackle most data cleaning tasks you’ll encounter.

Respecting Robots.txt and Avoiding Bans

Hello again, data enthusiasts! As we continue our web scraping journey, it’s important to remember that not all data is free for the taking. We need to respect the rules set by website owners, and we need to be mindful of our behavior to avoid getting banned. Let’s dive into the world of robots.txt and learn some techniques to avoid bans!

Understanding the Robots.txt File

The robots.txt file is a file at the root of a website that tells web crawlers which parts of the site the owners don’t want them to visit. It’s not a legally binding contract, but more of a “gentlemen’s agreement” that respectable web crawlers follow.

You can view a website’s robots.txt file by appending /robots.txt to the site’s root URL. For example, http://www.example.com/robots.txt.

Here’s an example of what a robots.txt file might look like:

User-agent: *
Disallow: /private
Disallow: /tmp

In this example, the User-agent: * line means that the following rules apply to all web crawlers. The Disallow lines list the paths that the web crawlers should not visit.
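
You don’t have to parse these rules by hand: Python ships with urllib.robotparser. Here is a sketch that feeds it the example rules above (normally you’d call set_url() and read() to fetch a live file):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse the example rules directly instead of fetching a live robots.txt
rp.parse("""User-agent: *
Disallow: /private
Disallow: /tmp""".splitlines())

# Check whether a given URL is allowed for our crawler
print(rp.can_fetch('*', 'https://www.example.com/private/data.html'))  # False
print(rp.can_fetch('*', 'https://www.example.com/catalogue/'))         # True
```

Calling can_fetch() before each request is an easy way to bake robots.txt compliance into your scraper.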

Avoiding Being Banned by Websites

Even if you respect the robots.txt file, you might still get banned if you send too many requests too quickly, as this can overload the server. Here are some techniques to avoid this:

  • Rate limiting: Don’t send too many requests in a short period of time. Add delays between your requests to spread them out over time. The appropriate rate depends on the website and the server, but a common rule of thumb is one request per second.
  • Respecting Retry-After headers: If a server is overloaded, it might send a Retry-After header to tell you how long to wait before sending another request. Respect this header if you see it.
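
Both techniques can be combined in a small helper. This is a minimal sketch, assuming the Retry-After header is given in seconds (servers may also send an HTTP date); the session and URL come from the caller:

```python
import time
import requests

def polite_get(session, url, delay=1.0):
    """Fetch a URL politely: honour Retry-After, then wait between requests."""
    response = session.get(url)
    retry_after = response.headers.get('Retry-After')
    if retry_after is not None:
        time.sleep(float(retry_after))  # back off for as long as the server asks
        response = session.get(url)     # then retry once
    time.sleep(delay)                   # baseline rate: ~1 request per second
    return response
```

With requests, you would call it as polite_get(requests.Session(), url) inside your crawl loop.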

Advanced Topics

As we dive deeper into the world of web scraping, we encounter more complex scenarios. Some websites require login to access certain data, others provide APIs that can be a more efficient way to get data. Let’s tackle these advanced topics!

Scraping Websites with Login

Some websites require users to log in to access certain data. To scrape this data, your script will need to log in as well. This typically involves sending a POST request with the username and password to the login URL.

Here’s an example using Python’s requests library:

import requests
from bs4 import BeautifulSoup

# Start a session
session = requests.Session()

# Get the login page
login_url = 'https://www.example.com/login'
response = session.get(login_url)
soup = BeautifulSoup(response.text, 'html.parser')

# Find the CSRF token
csrf_token = soup.find('input', attrs={'name': 'csrf_token'})['value']

# Log in
data = {
    'username': 'my_username',
    'password': 'my_password',
    'csrf_token': csrf_token,
}
response = session.post(login_url, data=data)

# Now you can access the protected pages
response = session.get('https://www.example.com/protected_page')

In this example, we start a session, get the login page, find the CSRF token (a common security measure), and send a POST request with the username, password, and CSRF token to log in. After logging in, we can use the session to access protected pages.

Using APIs Instead of Scraping When Possible

Many websites provide APIs (Application Programming Interfaces) that allow you to access their data in a structured format. Using an API can be more efficient and reliable than scraping the website, as the data is usually more structured and the API is designed to handle large amounts of requests.

Before you start scraping a website, check if it provides an API that can serve your needs. The API documentation should provide information on how to send requests and what data you can access.
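
As a sketch of what this looks like in practice, here is a function that queries a JSON API with requests. The endpoint and field names are hypothetical; check the site’s API documentation for the real URL, parameters, and authentication scheme:

```python
import requests

def fetch_books(page=1):
    response = requests.get(
        'https://api.example.com/v1/books',  # hypothetical endpoint
        params={'page': page},
        headers={'Accept': 'application/json'},
    )
    response.raise_for_status()  # raise on 4xx/5xx errors
    return response.json()       # structured data, no HTML parsing needed

# for book in fetch_books():
#     print(book['title'], book['price'])
```

Compare this with the BeautifulSoup version earlier: no selectors, no DOM navigation, just structured data straight from the server.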


As we wrap up our journey into the world of web scraping, it’s essential to recap the pivotal points we’ve covered. From understanding the basics to diving deep into advanced scraping techniques, we’ve unlocked the potential of extracting valuable data from the web.

Web Scraping: A Recap

  • What is Web Scraping and How It Works: At its core, web scraping is a method to extract data from websites. It’s a crucial skill, especially in today’s data-driven world. Whether you’re learning web scraping with Python or another language, the principles remain consistent.
  • Tools and Techniques: There’s a plethora of web scraping tools available, both free and paid. From Beautiful Soup to Scrapy, choosing the best tool to scrape data from a website depends on your specific needs. Remember, while tools make the process easier, understanding the underlying scraping tips and tricks is invaluable.
  • Real-World Application: Applying what we’ve learned to real-world scenarios, like the project we discussed, is the true test of our scraping skills. It’s not just about how to scrape, but also about understanding when and why to scrape.
  • Ethical Considerations: Always respect robots.txt and ensure you’re not infringing on any copyrights. The question “Is web scraping legal?” often arises; the answer depends on the jurisdiction, the site’s terms of service, and the data involved, so always make sure you’re scraping ethically as well as lawfully.

Further Resources for Learning

For those eager to dive deeper, there are numerous resources available. Whether you’re looking to learn web scraping with Python from scratch or seeking advanced tutorials, the web is filled with knowledge. Some recommended resources include:


Reach out if you want to join me and write articles with the nerds 🙂

© 2024 · Nerd Level Tech

