How to scrape Medium articles using Python

Extracting articles from Medium can be valuable for tasks such as content evaluation, data collection, or monitoring authors and their work. In this tutorial, we will learn how to scrape Medium, a publishing platform for writers, using the Python programming language. We will discuss how data such as the article title, the author's name, the publication name, and the article body itself can be extracted from the URL of a Medium article.

Requirements

For this tutorial, we’ll be scraping this article on Medium: “9 Python Built-in Decorators That Optimize Your Code Significantly”.

Before you start, install the following libraries:

  • Requests: To send HTTP requests to Medium.
  • lxml: For parsing HTML content.
  • Pandas: To save the data to a CSV file.

Install them with the following commands:


pip install requests
pip install lxml 
pip install pandas

Understanding the importance of headers and proxies

Medium uses bot-detection techniques to prevent unauthorized scraping. Well-chosen headers and proxies are crucial for avoiding detection and for scraping responsibly.

Headers: These make a request look as if it's coming from a real browser. They include information such as the browser type, accepted content types, and caching behavior.


headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'en-IN,en;q=0.9',
    'cache-control': 'no-cache',
    'dnt': '1',
    'pragma': 'no-cache',
    'priority': 'u=0, i',
    'sec-ch-ua': '"Google Chrome";v="129", "Not=A?Brand";v="8", "Chromium";v="129"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Linux"',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'none',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36',
}

Proxies: A proxy masks your IP address, and rotating addresses periodically makes it less likely that Medium blocks your requests. Here is an example of proxy usage with IP-address authentication:


proxies = {
    'http': 'http://IP:PORT',
    'https': 'http://IP:PORT'
}

response = requests.get(
    'https://medium.com/techtofreedom/9-python-built-in-decorators-that-optimize-your-code-significantly-bc3f661e9017',
    headers=headers,
    proxies=proxies
)
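
If you have access to several proxy endpoints, you can rotate through them so that no single IP address sends every request. Below is a minimal sketch of that idea; the addresses in proxy_pool are placeholders, and get_with_rotation is a helper name invented for this example:


import random
import requests

# Hypothetical pool of proxy endpoints -- replace with your own
proxy_pool = [
    'http://203.0.113.10:8080',
    'http://203.0.113.11:8080',
    'http://203.0.113.12:8080',
]

def get_with_rotation(url, headers):
    """Send a GET request through a randomly chosen proxy."""
    proxy = random.choice(proxy_pool)
    return requests.get(url, headers=headers,
                        proxies={'http': proxy, 'https': proxy})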

Sending a request to Medium

Here's how to set up the headers and send a request to the article URL:


import requests

# Headers to simulate a real browser request
headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'en-IN,en;q=0.9',
    'cache-control': 'no-cache',
    'dnt': '1',
    'pragma': 'no-cache',
    'priority': 'u=0, i',
    'sec-ch-ua': '"Google Chrome";v="129", "Not=A?Brand";v="8", "Chromium";v="129"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Linux"',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'none',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36',
}

url = 'https://medium.com/techtofreedom/9-python-built-in-decorators-that-optimize-your-code-significantly-bc3f661e9017'
response = requests.get(url, headers=headers)
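
Before parsing, it's worth confirming that the request actually succeeded; Medium typically answers blocked requests with an error status such as 403. Requests provides a built-in check:


# Raise an HTTPError if Medium returned an error status (e.g. 403)
response.raise_for_status()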

Extracting data

Once we have the page content, we can parse it and extract relevant information.

Parsing HTML content

We'll use lxml to parse the HTML response and extract specific elements. Here’s how to do it:


from lxml.html import fromstring

parser = fromstring(response.text)

# Extract data via data-testid attributes, which are relatively stable
title = parser.xpath('//h1[@data-testid="storyTitle"]/text()')[0]
author = parser.xpath('//a[@data-testid="authorName"]/text()')[0]
publication_name = parser.xpath('//a[@data-testid="publicationName"]/p/text()')[0]
publication_date = parser.xpath('//span[@data-testid="storyPublishDate"]/text()')[0]

# The class names and the subtitle id below are auto-generated by Medium
# and specific to this article; expect to adjust them for other articles
content = '\n '.join(parser.xpath('//div[@class="ci bh ga gb gc gd"]/p/text()'))
auth_followers = parser.xpath('//span[@class="pw-follower-count bf b bg z bk"]/a/text()')[0]
sub_title = parser.xpath('//h2[@id="1de6"]/text()')[0]
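
Note that indexing with [0] raises an IndexError whenever an XPath matches nothing, which happens as soon as Medium changes its markup. If you prefer the script to degrade gracefully, a small helper of our own (not part of lxml) can return a default value instead:


def first_or_default(parser, xpath, default=''):
    """Return the first XPath match, or a default if nothing matches."""
    results = parser.xpath(xpath)
    return results[0] if results else default

# Example usage
title = first_or_default(parser, '//h1[@data-testid="storyTitle"]/text()')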

Now, we'll create a dictionary to hold all extracted data. This makes it easier to save to a CSV file.


# Save data in a dictionary
article_data = {
    'Title': title,
    'Author': author,
    'Publication': publication_name,
    'Date': publication_date,
    'Followers': auth_followers,
    'Subtitle': sub_title,
    'Content': content,
}

print(article_data)

Saving data to a CSV file

Finally, let’s save the data to a CSV file for further analysis or record-keeping.


import pandas as pd

# Convert dictionary to DataFrame and save as CSV
df = pd.DataFrame([article_data])
df.to_csv('medium_article_data.csv', index=False)
print("Data saved to medium_article_data.csv")

Full code

Here's the complete code for scraping the Medium article data:


import requests
from lxml.html import fromstring
import pandas as pd

# Headers to mimic a browser
headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'en-IN,en;q=0.9',
    'cache-control': 'no-cache',
    'dnt': '1',
    'pragma': 'no-cache',
    'priority': 'u=0, i',
    'sec-ch-ua': '"Google Chrome";v="129", "Not=A?Brand";v="8", "Chromium";v="129"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Linux"',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'none',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36',
}


proxies = {
    'http': 'http://IP:PORT',
    'https': 'http://IP:PORT'
}

# Requesting the page
url = 'https://medium.com/techtofreedom/9-python-built-in-decorators-that-optimize-your-code-significantly-bc3f661e9017'
response = requests.get(url, headers=headers, proxies=proxies)

# Parsing the page
parser = fromstring(response.text)

# Extract data via data-testid attributes, which are relatively stable
title = parser.xpath('//h1[@data-testid="storyTitle"]/text()')[0]
author = parser.xpath('//a[@data-testid="authorName"]/text()')[0]
publication_name = parser.xpath('//a[@data-testid="publicationName"]/p/text()')[0]
publication_date = parser.xpath('//span[@data-testid="storyPublishDate"]/text()')[0]

# The class names and the subtitle id below are auto-generated by Medium
# and specific to this article; expect to adjust them for other articles
content = '\n '.join(parser.xpath('//div[@class="ci bh ga gb gc gd"]/p/text()'))
auth_followers = parser.xpath('//span[@class="pw-follower-count bf b bg z bk"]/a/text()')[0]
sub_title = parser.xpath('//h2[@id="1de6"]/text()')[0]

# Saving data
article_data = {
    'Title': title,
    'Author': author,
    'Publication': publication_name,
    'Date': publication_date,
    'Followers': auth_followers,
    'Subtitle': sub_title,
    'Content': content,
}

# Save to CSV
df = pd.DataFrame([article_data])
df.to_csv('medium_article_data.csv', index=False)
print("Data saved to medium_article_data.csv")

Scraping content from Medium should be conducted responsibly. Excessive request load on servers can affect the service's performance, and scraping data without permission may violate the website's terms of use. Always check the robots.txt file and terms before scraping any website.
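
Python's standard library even ships a parser for robots.txt. A minimal sketch of such a check:


from urllib.robotparser import RobotFileParser

# Download and parse Medium's robots.txt
rp = RobotFileParser('https://medium.com/robots.txt')
rp.read()

url = 'https://medium.com/techtofreedom/9-python-built-in-decorators-that-optimize-your-code-significantly-bc3f661e9017'
if rp.can_fetch('*', url):
    print('Allowed by robots.txt')
else:
    print('Disallowed by robots.txt -- do not scrape this URL')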
