This repository has been archived by the owner on Aug 7, 2023. It is now read-only.

Tech: add a basic link checker #143

Merged (2 commits, Jun 2, 2020)
8 changes: 4 additions & 4 deletions .github/workflows/build.yml
@@ -22,14 +22,14 @@ jobs:
         run: |
           make install
 
-      - name: Run tests
-        run: |
-          make test
-
       - name: Build
         run: |
           make build
 
+      - name: Run tests
+        run: |
+          make test
+
       - uses: actions/upload-artifact@v2
         with:
           name: mesconseilscovid-${{ env.GITHUB_REF_SLUG_URL }}-${{ env.GITHUB_SHA_SHORT }}
3 changes: 2 additions & 1 deletion Makefile
@@ -14,8 +14,9 @@ install: ## Install Python and JS dependencies.
 	python3 -m pip install -r requirements.txt
 	npm install
 
-test: ## Run JS unit tests.
+test: ## Run JS unit tests + links checker.
 	npm run-script test
+	python3 test.py links
 
 lint: ## Run ESLint.
 	npm run-script lint
1 change: 1 addition & 0 deletions requirements.txt
@@ -1,3 +1,4 @@
+httpx==0.13.3
 Jinja2==2.11.2
 livereload==2.6.1
 MarkupSafe==1.1.1
47 changes: 47 additions & 0 deletions test.py
@@ -0,0 +1,47 @@
#!/usr/bin/env python3
import fnmatch
import os
from html.parser import HTMLParser
from http import HTTPStatus
from pathlib import Path
from time import perf_counter

import httpx
from minicli import cli, run, wrap

HERE = Path(__file__).parent


class LinkExtractor(HTMLParser):
    def reset(self):
        HTMLParser.reset(self)
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            # .get() avoids a KeyError on <a> tags that have no href attribute.
            if attrs.get("href", "").startswith("http"):
                self.links.add(attrs["href"])


@cli
def links():
    parser = LinkExtractor()
    # read_text() closes the file handle, unlike a bare open().read().
    content = (HERE / "src" / "index.html").read_text()
    parser.feed(content)
    for link in parser.links:
        response = httpx.get(link)
        if response.status_code != HTTPStatus.OK:
            raise Exception(f"{link} is broken! ({response.status_code})")


@wrap
def perf_wrapper():
    start = perf_counter()
    yield
    elapsed = perf_counter() - start
    print(f"Done in {elapsed:.5f} seconds.")


if __name__ == "__main__":
    run()
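
For reference, a minimal standalone sketch of how the extractor in this PR behaves, without any network calls. The HTML snippet and printed set are illustrative, not taken from the project's `src/index.html`:

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect absolute (http/https) link targets from <a> tags."""

    def reset(self):
        HTMLParser.reset(self)
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            # Relative links and href-less anchors are ignored.
            if attrs.get("href", "").startswith("http"):
                self.links.add(attrs["href"])


parser = LinkExtractor()
parser.feed(
    '<p><a href="https://example.com/a">kept</a>'
    '<a href="/relative">skipped</a>'
    '<a name="anchor">no href</a></p>'
)
print(sorted(parser.links))  # ['https://example.com/a']
```

Only the absolute link survives; the link checker then only has to issue one `httpx.get` per collected URL.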