From d3eb5280b34abb550233cd81a3f3ba628d712085 Mon Sep 17 00:00:00 2001
From: Honza Javorek
Date: Mon, 25 Aug 2025 21:03:18 +0200
Subject: [PATCH] style: do not use three dots in incomplete examples

---
 .../scraping_basics_javascript2/11_scraping_variants.md | 4 ----
 .../webscraping/scraping_basics_javascript2/12_framework.md | 2 --
 .../scraping_basics_python/11_scraping_variants.md | 4 ----
 .../webscraping/scraping_basics_python/12_framework.md | 2 --
 4 files changed, 12 deletions(-)

diff --git a/sources/academy/webscraping/scraping_basics_javascript2/11_scraping_variants.md b/sources/academy/webscraping/scraping_basics_javascript2/11_scraping_variants.md
index 6cebba6587..414ac82f0e 100644
--- a/sources/academy/webscraping/scraping_basics_javascript2/11_scraping_variants.md
+++ b/sources/academy/webscraping/scraping_basics_javascript2/11_scraping_variants.md
@@ -72,8 +72,6 @@ These elements aren't visible to regular visitors. They're there just in case Ja
 Using our knowledge of Beautiful Soup, we can locate the options and extract the data we need:
 
 ```py
-...
-
 listing_url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
 listing_soup = download(listing_url)
 
@@ -89,8 +87,6 @@ for product in listing_soup.select(".product-item"):
         else:
             item["variant_name"] = None
             data.append(item)
-
-...
 ```
 
 The CSS selector `.product-form__option.no-js` matches elements with both `product-form__option` and `no-js` classes. Then we're using the [descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator) to match all `option` elements somewhere inside the `.product-form__option.no-js` wrapper.
diff --git a/sources/academy/webscraping/scraping_basics_javascript2/12_framework.md b/sources/academy/webscraping/scraping_basics_javascript2/12_framework.md
index fe80fb5fc1..ae1abb53ff 100644
--- a/sources/academy/webscraping/scraping_basics_javascript2/12_framework.md
+++ b/sources/academy/webscraping/scraping_basics_javascript2/12_framework.md
@@ -534,7 +534,6 @@ If you export the dataset as JSON, it should look something like this:
 To scrape IMDb data, you'll need to construct a `Request` object with the appropriate search URL for each movie title. The following code snippet gives you an idea of how to do this:
 
 ```py
-...
 from urllib.parse import quote_plus
 
 async def main():
@@ -550,7 +549,6 @@ async def main():
     await context.add_requests(requests)
 
     ...
-...
 ```
 
 When navigating to the first search result, you might find it helpful to know that `context.enqueue_links()` accepts a `limit` keyword argument, letting you specify the max number of HTTP requests to enqueue.
diff --git a/sources/academy/webscraping/scraping_basics_python/11_scraping_variants.md b/sources/academy/webscraping/scraping_basics_python/11_scraping_variants.md
index 2d8b9e8226..7c799759ef 100644
--- a/sources/academy/webscraping/scraping_basics_python/11_scraping_variants.md
+++ b/sources/academy/webscraping/scraping_basics_python/11_scraping_variants.md
@@ -71,8 +71,6 @@ These elements aren't visible to regular visitors. They're there just in case Ja
 Using our knowledge of Beautiful Soup, we can locate the options and extract the data we need:
 
 ```py
-...
-
 listing_url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
 listing_soup = download(listing_url)
 
@@ -88,8 +86,6 @@ for product in listing_soup.select(".product-item"):
         else:
             item["variant_name"] = None
             data.append(item)
-
-...
 ```
 
 The CSS selector `.product-form__option.no-js` matches elements with both `product-form__option` and `no-js` classes. Then we're using the [descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator) to match all `option` elements somewhere inside the `.product-form__option.no-js` wrapper.
diff --git a/sources/academy/webscraping/scraping_basics_python/12_framework.md b/sources/academy/webscraping/scraping_basics_python/12_framework.md
index c8b5f64685..63be4cf61a 100644
--- a/sources/academy/webscraping/scraping_basics_python/12_framework.md
+++ b/sources/academy/webscraping/scraping_basics_python/12_framework.md
@@ -533,7 +533,6 @@ If you export the dataset as JSON, it should look something like this:
 To scrape IMDb data, you'll need to construct a `Request` object with the appropriate search URL for each movie title. The following code snippet gives you an idea of how to do this:
 
 ```py
-...
 from urllib.parse import quote_plus
 
 async def main():
@@ -549,7 +548,6 @@ async def main():
     await context.add_requests(requests)
 
     ...
-...
 ```
 
 When navigating to the first search result, you might find it helpful to know that `context.enqueue_links()` accepts a `limit` keyword argument, letting you specify the max number of HTTP requests to enqueue.
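
For reviewers who want to see what the retained examples rely on: the variants lesson's `.product-form__option.no-js option` selector combines a compound class selector with the descendant combinator. A minimal Beautiful Soup sketch, using an invented HTML fragment rather than the lesson's Warehouse store pages:

```py
from bs4 import BeautifulSoup

# Invented fragment mimicking the lesson's markup: a wrapper carrying both
# the "product-form__option" and "no-js" classes, with <option> elements
# nested somewhere inside it.
html = """
<div class="product-form__option no-js">
  <select>
    <option>Red</option>
    <option>Blue</option>
  </select>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# ".product-form__option.no-js" matches the wrapper only if it has both
# classes; the space (descendant combinator) then matches any <option>
# at any depth inside that wrapper.
for option in soup.select(".product-form__option.no-js option"):
    print(option.text.strip())  # Red, then Blue
```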
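
Likewise, the framework lessons' hint about `Request` objects and `enqueue_links(limit=...)` can be sketched in a few lines. The title list, the `label` value, and the exact IMDb query parameters below are assumptions for illustration, not the lessons' solution code:

```py
from urllib.parse import quote_plus

from crawlee import Request

# Hypothetical titles; the lesson derives them from previously scraped data.
titles = ["Sintel", "The Hobbit: An Unexpected Journey"]

# One plausible shape of an IMDb search URL, with the title URL-encoded.
requests = [
    Request.from_url(
        f"https://www.imdb.com/find/?q={quote_plus(title)}&s=tt&ttype=ft",
        label="SEARCH",  # assumed label for a matching router handler
    )
    for title in titles
]

# Inside a request handler these would be scheduled with
#     await context.add_requests(requests)
# and following only the first search result would use
#     await context.enqueue_links(limit=1)
print([request.url for request in requests])
```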