From 3298dee26476082e752853885e00de097093a44e Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 29 Jul 2024 23:20:34 +0000 Subject: [PATCH 01/18] Bump nokogiri from 1.16.6 to 1.16.7 Bumps [nokogiri](https://github.com/sparklemotion/nokogiri) from 1.16.6 to 1.16.7. - [Release notes](https://github.com/sparklemotion/nokogiri/releases) - [Changelog](https://github.com/sparklemotion/nokogiri/blob/main/CHANGELOG.md) - [Commits](https://github.com/sparklemotion/nokogiri/compare/v1.16.6...v1.16.7) --- updated-dependencies: - dependency-name: nokogiri dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] --- Gemfile.lock | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Gemfile.lock b/Gemfile.lock index b832b3718..9ff8a275e 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -57,7 +57,7 @@ GEM jekyll (>= 3.5, < 5.0) jekyll-feed (~> 0.9) jekyll-seo-tag (~> 2.1) - nokogiri (1.16.6) + nokogiri (1.16.7) mini_portile2 (~> 2.8.2) racc (~> 1.4) pathutil (0.16.2) From 9cd114cc7d04c06351c2d856a2a49f183540f3a5 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 Aug 2024 23:27:25 +0000 Subject: [PATCH 02/18] Bump carlosperate/arm-none-eabi-gcc-action from 1.9.0 to 1.9.1 Bumps [carlosperate/arm-none-eabi-gcc-action](https://github.com/carlosperate/arm-none-eabi-gcc-action) from 1.9.0 to 1.9.1. - [Release notes](https://github.com/carlosperate/arm-none-eabi-gcc-action/releases) - [Changelog](https://github.com/carlosperate/arm-none-eabi-gcc-action/blob/main/CHANGELOG.md) - [Commits](https://github.com/carlosperate/arm-none-eabi-gcc-action/compare/v1.9.0...v1.9.1) --- updated-dependencies: - dependency-name: carlosperate/arm-none-eabi-gcc-action dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] --- .github/workflows/build.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 5ed1fee9c..b3af53936 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -37,7 +37,7 @@ jobs: with: version: 1.10.2 - name: Install arm-none-eabi-gcc GNU Arm Embedded Toolchain - uses: carlosperate/arm-none-eabi-gcc-action@v1.9.0 + uses: carlosperate/arm-none-eabi-gcc-action@v1.9.1 - name: Install Doxygen run: | wget https://www.doxygen.nl/files/doxygen-1.10.0.linux.bin.tar.gz From 4b5fde7b86be629e9eaefbc3bbd79ce9173ff911 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 Aug 2024 23:31:16 +0000 Subject: [PATCH 03/18] Bump wdm from 0.1.1 to 0.2.0 Bumps [wdm](https://github.com/Maher4Ever/wdm) from 0.1.1 to 0.2.0. - [Release notes](https://github.com/Maher4Ever/wdm/releases) - [Commits](https://github.com/Maher4Ever/wdm/commits) --- updated-dependencies: - dependency-name: wdm dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] --- Gemfile | 2 +- Gemfile.lock | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Gemfile b/Gemfile index 7a50b47b0..a54d14c20 100644 --- a/Gemfile +++ b/Gemfile @@ -31,7 +31,7 @@ install_if -> { RUBY_PLATFORM =~ %r!mingw|mswin|java! } do end # Performance-booster for watching directories on Windows -gem "wdm", "~> 0.1.0", :install_if => Gem.win_platform? +gem "wdm", "~> 0.2.0", :install_if => Gem.win_platform? 
gem "nokogiri", "~> 1.16" diff --git a/Gemfile.lock b/Gemfile.lock index b832b3718..831a870b2 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -87,7 +87,7 @@ GEM tzinfo-data (1.2024.1) tzinfo (>= 1.0.0) unicode-display_width (2.5.0) - wdm (0.1.1) + wdm (0.2.0) webrick (1.8.1) PLATFORMS @@ -103,7 +103,7 @@ DEPENDENCIES thread_safe (~> 0.3.5) tzinfo (~> 2.0) tzinfo-data - wdm (~> 0.1.0) + wdm (~> 0.2.0) BUNDLED WITH 2.3.22 From cbfff7d7b67821f80cb50e943e10aa810eeaf49a Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Mon, 12 Aug 2024 17:52:15 +0100 Subject: [PATCH 04/18] Standardise whitespace and shebang in Python scripts --- scripts/create_output_supplemental_data.py | 34 +-- scripts/postprocess_doxygen_adoc.py | 264 +++++++++---------- scripts/postprocess_doxygen_xml.py | 284 ++++++++++----------- scripts/tests/test_doxygen_adoc.py | 152 +++++------ tests/test_create_build_adoc.py | 2 + tests/test_create_build_adoc_doxygen.py | 2 + tests/test_create_build_adoc_include.py | 2 + tests/test_create_nav.py | 2 + 8 files changed, 377 insertions(+), 365 deletions(-) diff --git a/scripts/create_output_supplemental_data.py b/scripts/create_output_supplemental_data.py index b24e05e73..4052046f1 100755 --- a/scripts/create_output_supplemental_data.py +++ b/scripts/create_output_supplemental_data.py @@ -6,24 +6,24 @@ import re def get_release_version(doxyfile_path): - version = "unknown" - with open(doxyfile_path) as f: - doxy_content = f.read() - version_search = re.search(r"(\nPROJECT_NUMBER\s*=\s*)([\d.]+)", doxy_content) - if version_search is not None: - version = version_search.group(2) - return version + version = "unknown" + with open(doxyfile_path) as f: + doxy_content = f.read() + version_search = re.search(r"(\nPROJECT_NUMBER\s*=\s*)([\d.]+)", doxy_content) + if version_search is not None: + version = version_search.group(2) + return version def write_new_data_file(output_json_file, data_obj): - f = open(output_json_file, 'w') - f.write(json.dumps(data_obj)) - f.close() + f = open(output_json_file, 'w') + f.write(json.dumps(data_obj)) + f.close() if __name__ == "__main__": - # read the doxygen config file - doxyfile_path = sys.argv[1] - # output the new data file - output_json_file = sys.argv[2] - version = get_release_version(doxyfile_path) - data_obj = {"pico_sdk_release": version} - write_new_data_file(output_json_file, data_obj) + # read the doxygen config file + doxyfile_path = sys.argv[1] + # output the new data file + output_json_file = sys.argv[2] + version = get_release_version(doxyfile_path) + data_obj = {"pico_sdk_release": version} + write_new_data_file(output_json_file, data_obj) diff --git a/scripts/postprocess_doxygen_adoc.py b/scripts/postprocess_doxygen_adoc.py index bf6d4fe0a..eab442bf0 100644 --- a/scripts/postprocess_doxygen_adoc.py +++ b/scripts/postprocess_doxygen_adoc.py @@ -1,149 +1,151 @@ +#!/usr/bin/env python3 + import re import sys import os import json def cleanup_text_page(adoc_file, output_adoc_path, link_targets): - filename = os.path.basename(adoc_file) - with open(adoc_file) as f: - adoc_content = f.read() - # remove any errant spaces before anchors - adoc_content = re.sub(r'( +)(\[\[[^[]*?\]\])', "\\2", adoc_content) - # collect link targets - for line in adoc_content.split('\n'): - link_targets = collect_link_target(line, filename) - with open(adoc_file, 'w') as f: - f.write(adoc_content) - return link_targets + filename = os.path.basename(adoc_file) + with open(adoc_file) as f: + adoc_content = f.read() + # remove any errant spaces before anchors + 
adoc_content = re.sub(r'( +)(\[\[[^[]*?\]\])', "\\2", adoc_content) + # collect link targets + for line in adoc_content.split('\n'): + link_targets = collect_link_target(line, filename) + with open(adoc_file, 'w') as f: + f.write(adoc_content) + return link_targets def collect_link_target(line, chapter_filename): - # collect a list of all link targets, so we can fix internal links - l = re.search(r'(#)([^,\]]+)([,\]])', line) - if l is not None: - link_targets[l.group(2)] = chapter_filename - return link_targets + # collect a list of all link targets, so we can fix internal links + l = re.search(r'(#)([^,\]]+)([,\]])', line) + if l is not None: + link_targets[l.group(2)] = chapter_filename + return link_targets def resolve_links(adoc_file, link_targets): - filename = os.path.basename(adoc_file) - with open(adoc_file) as f: - adoc_content = f.read() - output_content = [] - for line in adoc_content.split('\n'): - # e.g., <> - m = re.search("(<<)([^,]+)(,?[^>]*>>)", line) - if m is not None: - target = m.group(2) - # only resolve link if it points to another file - if target in link_targets and link_targets[target] != filename: - new_target = link_targets[target]+"#"+target - line = re.sub("(<<)([^,]+)(,?[^>]*>>)", f"\\1{new_target}\\3", line) - output_content.append(line) - with open(adoc_file, 'w') as f: - f.write('\n'.join(output_content)) - return + filename = os.path.basename(adoc_file) + with open(adoc_file) as f: + adoc_content = f.read() + output_content = [] + for line in adoc_content.split('\n'): + # e.g., <> + m = re.search("(<<)([^,]+)(,?[^>]*>>)", line) + if m is not None: + target = m.group(2) + # only resolve link if it points to another file + if target in link_targets and link_targets[target] != filename: + new_target = link_targets[target]+"#"+target + line = re.sub("(<<)([^,]+)(,?[^>]*>>)", f"\\1{new_target}\\3", line) + output_content.append(line) + with open(adoc_file, 'w') as f: + f.write('\n'.join(output_content)) + return def build_json(sections, output_path): - json_path = os.path.join(output_path, "picosdk_index.json") - with open(json_path, 'w') as f: - f.write(json.dumps(sections, indent="\t")) - return + json_path = os.path.join(output_path, "picosdk_index.json") + with open(json_path, 'w') as f: + f.write(json.dumps(sections, indent="\t")) + return def tag_content(adoc_content): - # this is dependent on the same order of attributes every time - ids_to_tag = re.findall(r'(\[#)(.*?)(,.*?contextspecific,tag=)(.*?)(,type=)(.*?)(\])', adoc_content) - for this_id in ids_to_tag: - tag = re.sub("PICO_", "", this_id[3]) - img = f" [.contexttag {tag}]*{tag}*" - # `void <> ()`:: An rp2040 function. - adoc_content = re.sub(rf'(\n`.*?<<{this_id[1]},.*?`)(::)', f"\\1{img}\\2", adoc_content) - # |<>\n|Low-level types and (atomic) accessors for memory-mapped hardware registers. 
- adoc_content = re.sub(rf'(\n\|<<{this_id[1]},.*?>>\n\|.*?)(\n)', f"\\1{img}\\2", adoc_content) - # [#group_cyw43_ll_1ga0411cd49bb5b71852cecd93bcbf0ca2d,role=contextspecific,tag=PICO_RP2040,type=PICO_RP2040]\n=== anonymous enum - HEADING_RE = re.compile(r'(\[#.*?role=contextspecific.*?tag=P?I?C?O?_?)(.*?)(,.*?\]\s*?\n\s*=+\s+\S*?)(\n)') - # [#group_cyw43_ll_1ga0411cd49bb5b71852cecd93bcbf0ca2d,role=h6 contextspecific,tag=PICO_RP2040,type=PICO_RP2040]\n*anonymous enum* - H6_HEADING_RE = re.compile(r'(\[#.*?role=h6 contextspecific.*?tag=P?I?C?O?_?)(.*?)(,.*?\]\s*?\n\s*\*\S+.*?)(\n)') - # [#group_cyw43_ll_1ga0411cd49bb5b71852cecd93bcbf0ca2d,role=h6 contextspecific,tag=PICO_RP2040,type=PICO_RP2040]\n---- - NONHEADING_RE = re.compile(r'(\[#.*?role=h?6?\s?contextspecific.*?tag=P?I?C?O?_?)(.*?)(,.*?\]\s*?\n\s*[^=\*])') - adoc_content = re.sub(HEADING_RE, f'\\1\\2\\3 [.contexttag \\2]*\\2*\n', adoc_content) - adoc_content = re.sub(H6_HEADING_RE, f'\\1\\2\\3 [.contexttag \\2]*\\2*\n', adoc_content) - adoc_content = re.sub(NONHEADING_RE, f'[.contexttag \\2]*\\2*\n\n\\1\\2\\3', adoc_content) - return adoc_content + # this is dependent on the same order of attributes every time + ids_to_tag = re.findall(r'(\[#)(.*?)(,.*?contextspecific,tag=)(.*?)(,type=)(.*?)(\])', adoc_content) + for this_id in ids_to_tag: + tag = re.sub("PICO_", "", this_id[3]) + img = f" [.contexttag {tag}]*{tag}*" + # `void <> ()`:: An rp2040 function. + adoc_content = re.sub(rf'(\n`.*?<<{this_id[1]},.*?`)(::)', f"\\1{img}\\2", adoc_content) + # |<>\n|Low-level types and (atomic) accessors for memory-mapped hardware registers. + adoc_content = re.sub(rf'(\n\|<<{this_id[1]},.*?>>\n\|.*?)(\n)', f"\\1{img}\\2", adoc_content) + # [#group_cyw43_ll_1ga0411cd49bb5b71852cecd93bcbf0ca2d,role=contextspecific,tag=PICO_RP2040,type=PICO_RP2040]\n=== anonymous enum + HEADING_RE = re.compile(r'(\[#.*?role=contextspecific.*?tag=P?I?C?O?_?)(.*?)(,.*?\]\s*?\n\s*=+\s+\S*?)(\n)') + # [#group_cyw43_ll_1ga0411cd49bb5b71852cecd93bcbf0ca2d,role=h6 contextspecific,tag=PICO_RP2040,type=PICO_RP2040]\n*anonymous enum* + H6_HEADING_RE = re.compile(r'(\[#.*?role=h6 contextspecific.*?tag=P?I?C?O?_?)(.*?)(,.*?\]\s*?\n\s*\*\S+.*?)(\n)') + # [#group_cyw43_ll_1ga0411cd49bb5b71852cecd93bcbf0ca2d,role=h6 contextspecific,tag=PICO_RP2040,type=PICO_RP2040]\n---- + NONHEADING_RE = re.compile(r'(\[#.*?role=h?6?\s?contextspecific.*?tag=P?I?C?O?_?)(.*?)(,.*?\]\s*?\n\s*[^=\*])') + adoc_content = re.sub(HEADING_RE, f'\\1\\2\\3 [.contexttag \\2]*\\2*\n', adoc_content) + adoc_content = re.sub(H6_HEADING_RE, f'\\1\\2\\3 [.contexttag \\2]*\\2*\n', adoc_content) + adoc_content = re.sub(NONHEADING_RE, f'[.contexttag \\2]*\\2*\n\n\\1\\2\\3', adoc_content) + return adoc_content def postprocess_doxygen_adoc(adoc_file, output_adoc_path, link_targets): - output_path = re.sub(r'[^/]+$', "", adoc_file) - sections = [{ - "group_id": "index_doxygen", - "name": "Introduction", - "description": "An introduction to the Pico SDK", - "html": "index_doxygen.html", - "subitems": [] - }] - with open(adoc_file) as f: - adoc_content = f.read() - # first, lets add any tags - adoc_content = tag_content(adoc_content) - # now split the file into top-level sections: - # toolchain expects all headings to be two levels lower - adoc_content = re.sub(r'(\n==)(=+ \S+)', "\n\\2", adoc_content) - # then make it easier to match the chapter breaks - adoc_content = re.sub(r'(\[#.*?,reftext=".*?"\])(\s*\n)(= )', "\\1\\3", adoc_content) - # find all the chapter descriptions, to use later - descriptions = 
re.findall(r'(\[#.*?,reftext=".*?"\])(= .*?\n\s*\n)(.*?)(\n)', adoc_content) - CHAPTER_START_RE = re.compile(r'(\[#)(.*?)(,reftext=".*?"\]= )(.*?$)') - # check line by line; if the line matches our chapter break, - # then pull all following lines into the chapter list until a new match. - chapter_filename = "all_groups.adoc" - current_chapter = None - chapter_dict = {} - counter = 0 - for line in adoc_content.split('\n'): - link_targets = collect_link_target(line, chapter_filename) - m = CHAPTER_START_RE.match(line) - if m is not None: - # write the previous chapter - if current_chapter is not None: - with open(chapter_path, 'w') as f: - f.write('\n'.join(current_chapter)) - # start the new chapter - current_chapter = [] - # set the data for this chapter - group_id = re.sub("^group_+", "", m.group(2)) - chapter_filename = group_id+".adoc" - chapter_path = os.path.join(output_path, chapter_filename) - chapter_dict = { - "group_id": group_id, - "html": group_id+".html", - "name": m.group(4), - "subitems": [], - "description": descriptions[counter][2] - } - sections.append(chapter_dict) - # re-split the line into 2 - start_line = re.sub("= ", "\n= ", line) - current_chapter.append(start_line) - counter += 1 - else: - current_chapter.append(line) - # write the last chapter - if current_chapter is not None: - with open(chapter_path, 'w') as f: - f.write('\n'.join(current_chapter)) - build_json(sections, output_path) - os.remove(adoc_file) - return link_targets + output_path = re.sub(r'[^/]+$', "", adoc_file) + sections = [{ + "group_id": "index_doxygen", + "name": "Introduction", + "description": "An introduction to the Pico SDK", + "html": "index_doxygen.html", + "subitems": [] + }] + with open(adoc_file) as f: + adoc_content = f.read() + # first, lets add any tags + adoc_content = tag_content(adoc_content) + # now split the file into top-level sections: + # toolchain expects all headings to be two levels lower + adoc_content = re.sub(r'(\n==)(=+ \S+)', "\n\\2", adoc_content) + # then make it easier to match the chapter breaks + adoc_content = re.sub(r'(\[#.*?,reftext=".*?"\])(\s*\n)(= )', "\\1\\3", adoc_content) + # find all the chapter descriptions, to use later + descriptions = re.findall(r'(\[#.*?,reftext=".*?"\])(= .*?\n\s*\n)(.*?)(\n)', adoc_content) + CHAPTER_START_RE = re.compile(r'(\[#)(.*?)(,reftext=".*?"\]= )(.*?$)') + # check line by line; if the line matches our chapter break, + # then pull all following lines into the chapter list until a new match. 
+ chapter_filename = "all_groups.adoc" + current_chapter = None + chapter_dict = {} + counter = 0 + for line in adoc_content.split('\n'): + link_targets = collect_link_target(line, chapter_filename) + m = CHAPTER_START_RE.match(line) + if m is not None: + # write the previous chapter + if current_chapter is not None: + with open(chapter_path, 'w') as f: + f.write('\n'.join(current_chapter)) + # start the new chapter + current_chapter = [] + # set the data for this chapter + group_id = re.sub("^group_+", "", m.group(2)) + chapter_filename = group_id+".adoc" + chapter_path = os.path.join(output_path, chapter_filename) + chapter_dict = { + "group_id": group_id, + "html": group_id+".html", + "name": m.group(4), + "subitems": [], + "description": descriptions[counter][2] + } + sections.append(chapter_dict) + # re-split the line into 2 + start_line = re.sub("= ", "\n= ", line) + current_chapter.append(start_line) + counter += 1 + else: + current_chapter.append(line) + # write the last chapter + if current_chapter is not None: + with open(chapter_path, 'w') as f: + f.write('\n'.join(current_chapter)) + build_json(sections, output_path) + os.remove(adoc_file) + return link_targets if __name__ == '__main__': - output_adoc_path = sys.argv[1] - adoc_files = [f for f in os.listdir(output_adoc_path) if re.search(".adoc", f) is not None] - link_targets = {} - for adoc_file in adoc_files: - adoc_filepath = os.path.join(output_adoc_path, adoc_file) - if re.search("all_groups.adoc", adoc_file) is not None: - link_targets = postprocess_doxygen_adoc(adoc_filepath, output_adoc_path, link_targets) - else: - link_targets = cleanup_text_page(adoc_filepath, output_adoc_path, link_targets) - # now that we have a complete list of all link targets, resolve all internal links - adoc_files = [f for f in os.listdir(output_adoc_path) if re.search(".adoc", f) is not None] - for adoc_file in adoc_files: - adoc_filepath = os.path.join(output_adoc_path, adoc_file) - resolve_links(adoc_filepath, link_targets) + output_adoc_path = sys.argv[1] + adoc_files = [f for f in os.listdir(output_adoc_path) if re.search(".adoc", f) is not None] + link_targets = {} + for adoc_file in adoc_files: + adoc_filepath = os.path.join(output_adoc_path, adoc_file) + if re.search("all_groups.adoc", adoc_file) is not None: + link_targets = postprocess_doxygen_adoc(adoc_filepath, output_adoc_path, link_targets) + else: + link_targets = cleanup_text_page(adoc_filepath, output_adoc_path, link_targets) + # now that we have a complete list of all link targets, resolve all internal links + adoc_files = [f for f in os.listdir(output_adoc_path) if re.search(".adoc", f) is not None] + for adoc_file in adoc_files: + adoc_filepath = os.path.join(output_adoc_path, adoc_file) + resolve_links(adoc_filepath, link_targets) diff --git a/scripts/postprocess_doxygen_xml.py b/scripts/postprocess_doxygen_xml.py index b0f0b9e16..b2ae89314 100755 --- a/scripts/postprocess_doxygen_xml.py +++ b/scripts/postprocess_doxygen_xml.py @@ -13,157 +13,157 @@ # instead of searching every xml every time, make a list of available functions in each xml def compile_id_list(xml_content): - # get any element that has an id - els = xml_content.find_all(id=True) - id_list = [x["id"] for x in els] - return id_list + # get any element that has an id + els = xml_content.find_all(id=True) + id_list = [x["id"] for x in els] + return id_list def insert_example_code_from_file(combined_content): - els = combined_content.doxygen.find_all("programlisting") - all_examples = {} - # get the examples 
path - examples_path = re.sub(r"/scripts/.+$", "/lib/pico-examples", os.path.realpath(__file__)) - # get a recursive list of all files in examples - for f in os.walk(examples_path): - for filename in f[2]: - if filename in all_examples: - all_examples[filename].append(os.path.join(f[0], filename)) - else: - all_examples[filename] = [os.path.join(f[0], filename)] - for el in els: - if el.get("filename") is not None: - filename = el.get("filename") - # find the file here or in examples - if filename in all_examples: - with open(all_examples[filename][0]) as f: - example_content = f.read() - example_lines = example_content.split("\n") - for line in example_lines: - codeline = BeautifulSoup(""+html.escape(line)+"", 'xml') - el.append(codeline) - return combined_content + els = combined_content.doxygen.find_all("programlisting") + all_examples = {} + # get the examples path + examples_path = re.sub(r"/scripts/.+$", "/lib/pico-examples", os.path.realpath(__file__)) + # get a recursive list of all files in examples + for f in os.walk(examples_path): + for filename in f[2]: + if filename in all_examples: + all_examples[filename].append(os.path.join(f[0], filename)) + else: + all_examples[filename] = [os.path.join(f[0], filename)] + for el in els: + if el.get("filename") is not None: + filename = el.get("filename") + # find the file here or in examples + if filename in all_examples: + with open(all_examples[filename][0]) as f: + example_content = f.read() + example_lines = example_content.split("\n") + for line in example_lines: + codeline = BeautifulSoup(""+html.escape(line)+"", 'xml') + el.append(codeline) + return combined_content def walk_and_tag_xml_tree(el, output_contexts, all_contexts): - """ - Process an individual xml file, adding context-specific tags as needed. + """ + Process an individual xml file, adding context-specific tags as needed. - For performance purposes (to avoid traversing multiple dicts for every element), - we use element IDs as the key, and the contexts it belongs to as the value. - Thus, output_contexts will look something like this: - { - "group__hardware__gpio_1gaecd01f57f1cac060abe836793f7bea18": [ - "PICO_RP2040", - "FOO" - ], - "group__hardware__gpio_1ga7becbc8db22ff0a54707029a2c0010e6": [ - "PICO_RP2040" - ], - "group__hardware__gpio_1ga192335a098d40e08b23cc6d4e0513786": [ - "PICO_RP2040" - ], - "group__hardware__gpio_1ga8510fa7c1bf1c6e355631b0a2861b22b": [ - "FOO", - "BAR" - ], - "group__hardware__gpio_1ga5d7dbadb2233e2e6627e9101411beb27": [ - "FOO" - ] - } - """ - targets = [] - if el.get('id') is not None: - myid = el["id"] - if myid in output_contexts: - targets = output_contexts[myid] - # if this content is in all contexts, no label is required - if len(targets) > 0 and len(targets) < len(all_contexts): - el["role"] = "contextspecific" - el["tag"] = ', '.join(targets) - if len(targets) > 1: - el["type"] = "multi" - else: - el["type"] = targets[0] - # only check nested children if the parent has NOT been tagged as context-specific - else: - # for child in el.iterchildren(): - for child in el.find_all(True, recursive=False): - walk_and_tag_xml_tree(child, output_contexts, all_contexts) - else: - for child in el.find_all(True, recursive=False): - walk_and_tag_xml_tree(child, output_contexts, all_contexts) - return + For performance purposes (to avoid traversing multiple dicts for every element), + we use element IDs as the key, and the contexts it belongs to as the value. 
+ Thus, output_contexts will look something like this: + { + "group__hardware__gpio_1gaecd01f57f1cac060abe836793f7bea18": [ + "PICO_RP2040", + "FOO" + ], + "group__hardware__gpio_1ga7becbc8db22ff0a54707029a2c0010e6": [ + "PICO_RP2040" + ], + "group__hardware__gpio_1ga192335a098d40e08b23cc6d4e0513786": [ + "PICO_RP2040" + ], + "group__hardware__gpio_1ga8510fa7c1bf1c6e355631b0a2861b22b": [ + "FOO", + "BAR" + ], + "group__hardware__gpio_1ga5d7dbadb2233e2e6627e9101411beb27": [ + "FOO" + ] + } + """ + targets = [] + if el.get('id') is not None: + myid = el["id"] + if myid in output_contexts: + targets = output_contexts[myid] + # if this content is in all contexts, no label is required + if len(targets) > 0 and len(targets) < len(all_contexts): + el["role"] = "contextspecific" + el["tag"] = ', '.join(targets) + if len(targets) > 1: + el["type"] = "multi" + else: + el["type"] = targets[0] + # only check nested children if the parent has NOT been tagged as context-specific + else: + # for child in el.iterchildren(): + for child in el.find_all(True, recursive=False): + walk_and_tag_xml_tree(child, output_contexts, all_contexts) + else: + for child in el.find_all(True, recursive=False): + walk_and_tag_xml_tree(child, output_contexts, all_contexts) + return def postprocess_doxygen_xml_file(combined_xmlfile, xmlfiles, output_context_paths): - """ - Process an individual xml file, adding context-specific tags as needed. + """ + Process an individual xml file, adding context-specific tags as needed. - xmlfiles will look something like this: - { - "PICO_RP2040": "/path/to/PICO_RP2040/myfilename.xml", - "FOO": "/path/to/FOO/myfilename.xml" - } - """ - output_contexts = {} - for item in xmlfiles: - label = item - # parse the xml file - with open(xmlfiles[item], encoding="utf-8") as f: - xml_content = BeautifulSoup(f, 'xml') - # compile a list of all element ids within the file - id_list = compile_id_list(xml_content.doxygen) - # create the map of ids and their contexts (see example above) - for myid in id_list: - if myid in output_contexts: - output_contexts[myid].append(label) - else: - output_contexts[myid] = [label] - with open(combined_xmlfile, encoding="utf-8") as f: - combined_content = BeautifulSoup(f, 'xml') - # start with top-level children, and then walk the tree as appropriate - els = combined_content.doxygen.find_all(True, recursive=False) - for el in els: - walk_and_tag_xml_tree(el, output_contexts, list(output_context_paths.keys())) - combined_content = insert_example_code_from_file(combined_content) - return str(combined_content) + xmlfiles will look something like this: + { + "PICO_RP2040": "/path/to/PICO_RP2040/myfilename.xml", + "FOO": "/path/to/FOO/myfilename.xml" + } + """ + output_contexts = {} + for item in xmlfiles: + label = item + # parse the xml file + with open(xmlfiles[item], encoding="utf-8") as f: + xml_content = BeautifulSoup(f, 'xml') + # compile a list of all element ids within the file + id_list = compile_id_list(xml_content.doxygen) + # create the map of ids and their contexts (see example above) + for myid in id_list: + if myid in output_contexts: + output_contexts[myid].append(label) + else: + output_contexts[myid] = [label] + with open(combined_xmlfile, encoding="utf-8") as f: + combined_content = BeautifulSoup(f, 'xml') + # start with top-level children, and then walk the tree as appropriate + els = combined_content.doxygen.find_all(True, recursive=False) + for el in els: + walk_and_tag_xml_tree(el, output_contexts, list(output_context_paths.keys())) + 
combined_content = insert_example_code_from_file(combined_content) + return str(combined_content) def postprocess_doxygen_xml(xml_path): - """ - Expectation is that xml for each context will be generated - within a subfolder titled with the context name, e.g.: - - doxygen_build/ - - combined/ - - PICO_RP2040/ - - FOO/ - """ - # collect a list of all context-specific subdirs - skip = ["index.xml", "Doxyfile.xml"] - output_context_paths = {} - combined_output_path = None - for item in list(filter(lambda x: os.path.isdir(os.path.join(xml_path, x)), os.listdir(xml_path))): - if item == "combined": - # if doxygen ever changes the output path for the xml, this will need to be updated - combined_output_path = os.path.join(xml_path, item, "docs", "doxygen", "xml") - else: - # same as above - output_context_paths[item] = os.path.join(xml_path, item, "docs", "doxygen", "xml") - # we need to process all generated xml files - for combined_xmlfile in list(filter(lambda x: re.search(r'\.xml$', x) is not None, os.listdir(combined_output_path))): - # skip the index -- it's just a listing - if combined_xmlfile not in skip: - xmlfiles = {} - # get all context-specific versions of this file - for context in output_context_paths: - if os.path.isfile(os.path.join(output_context_paths[context], combined_xmlfile)): - xmlfiles[context] = os.path.join(output_context_paths[context], combined_xmlfile) - combined_content = postprocess_doxygen_xml_file(os.path.join(combined_output_path, combined_xmlfile), xmlfiles, output_context_paths) - # write the output - with open(os.path.join(combined_output_path, combined_xmlfile), 'w') as f: - f.write(combined_content) - return + """ + Expectation is that xml for each context will be generated + within a subfolder titled with the context name, e.g.: + - doxygen_build/ + - combined/ + - PICO_RP2040/ + - FOO/ + """ + # collect a list of all context-specific subdirs + skip = ["index.xml", "Doxyfile.xml"] + output_context_paths = {} + combined_output_path = None + for item in list(filter(lambda x: os.path.isdir(os.path.join(xml_path, x)), os.listdir(xml_path))): + if item == "combined": + # if doxygen ever changes the output path for the xml, this will need to be updated + combined_output_path = os.path.join(xml_path, item, "docs", "doxygen", "xml") + else: + # same as above + output_context_paths[item] = os.path.join(xml_path, item, "docs", "doxygen", "xml") + # we need to process all generated xml files + for combined_xmlfile in list(filter(lambda x: re.search(r'\.xml$', x) is not None, os.listdir(combined_output_path))): + # skip the index -- it's just a listing + if combined_xmlfile not in skip: + xmlfiles = {} + # get all context-specific versions of this file + for context in output_context_paths: + if os.path.isfile(os.path.join(output_context_paths[context], combined_xmlfile)): + xmlfiles[context] = os.path.join(output_context_paths[context], combined_xmlfile) + combined_content = postprocess_doxygen_xml_file(os.path.join(combined_output_path, combined_xmlfile), xmlfiles, output_context_paths) + # write the output + with open(os.path.join(combined_output_path, combined_xmlfile), 'w') as f: + f.write(combined_content) + return if __name__ == '__main__': - xml_path = sys.argv[1] - file_path = os.path.realpath(__file__) - # splitting thse subs into two parts to make testing easier - # xml_path = re.sub(r'/documentation-toolchain/.*?$', "/"+xml_path, re.sub(r'/lib/', "/", file_path)) - postprocess_doxygen_xml(xml_path) + xml_path = sys.argv[1] + file_path = 
os.path.realpath(__file__) + # splitting thse subs into two parts to make testing easier + # xml_path = re.sub(r'/documentation-toolchain/.*?$', "/"+xml_path, re.sub(r'/lib/', "/", file_path)) + postprocess_doxygen_xml(xml_path) diff --git a/scripts/tests/test_doxygen_adoc.py b/scripts/tests/test_doxygen_adoc.py index e4ce20b5e..9c25f6cc7 100644 --- a/scripts/tests/test_doxygen_adoc.py +++ b/scripts/tests/test_doxygen_adoc.py @@ -1,4 +1,6 @@ -import os +#!/usr/bin/env python3 + +import os import re import unittest from pathlib import Path @@ -6,85 +8,85 @@ # to run: on the command line, from the /scripts dir: python3 -m unittest tests.test_doxygen_adoc class TestDoxygenAdoc(unittest.TestCase): - def setUp(self): - self.current_file = os.path.realpath(__file__) - self.current_dir = Path(self.current_file).parent.absolute() - self.parent_dir = re.sub("/tests", "", str(self.current_dir)) + def setUp(self): + self.current_file = os.path.realpath(__file__) + self.current_dir = Path(self.current_file).parent.absolute() + self.parent_dir = re.sub("/tests", "", str(self.current_dir)) - def tearDown(self): - pass + def tearDown(self): + pass - def test_doxygen_adoc_variables(self): - # run AFTER the content has been built; - # test will fail if ANY of the below are different or missing - expected = { - "pico-sdk/index_doxygen.adoc" : [ - ":doctitle: Introduction - Raspberry Pi Documentation", - ":page-sub_title: Introduction" - ], - "pico-sdk/hardware.adoc": [ - ":doctitle: Hardware APIs - Raspberry Pi Documentation", - ":page-sub_title: Hardware APIs" - ], - "pico-sdk/high_level.adoc": [ - ":doctitle: High Level APIs - Raspberry Pi Documentation", - ":page-sub_title: High Level APIs" - ], - "pico-sdk/third_party.adoc": [ - ":doctitle: Third-party Libraries - Raspberry Pi Documentation", - ":page-sub_title: Third-party Libraries" - ], - "pico-sdk/networking.adoc": [ - ":doctitle: Networking Libraries - Raspberry Pi Documentation", - ":page-sub_title: Networking Libraries" - ], - "pico-sdk/runtime.adoc": [ - ":doctitle: Runtime Infrastructure - Raspberry Pi Documentation", - ":page-sub_title: Runtime Infrastructure" - ], - "pico-sdk/misc.adoc": [ - ":doctitle: External API Headers - Raspberry Pi Documentation", - ":page-sub_title: External API Headers" - ] - } + def test_doxygen_adoc_variables(self): + # run AFTER the content has been built; + # test will fail if ANY of the below are different or missing + expected = { + "pico-sdk/index_doxygen.adoc" : [ + ":doctitle: Introduction - Raspberry Pi Documentation", + ":page-sub_title: Introduction" + ], + "pico-sdk/hardware.adoc": [ + ":doctitle: Hardware APIs - Raspberry Pi Documentation", + ":page-sub_title: Hardware APIs" + ], + "pico-sdk/high_level.adoc": [ + ":doctitle: High Level APIs - Raspberry Pi Documentation", + ":page-sub_title: High Level APIs" + ], + "pico-sdk/third_party.adoc": [ + ":doctitle: Third-party Libraries - Raspberry Pi Documentation", + ":page-sub_title: Third-party Libraries" + ], + "pico-sdk/networking.adoc": [ + ":doctitle: Networking Libraries - Raspberry Pi Documentation", + ":page-sub_title: Networking Libraries" + ], + "pico-sdk/runtime.adoc": [ + ":doctitle: Runtime Infrastructure - Raspberry Pi Documentation", + ":page-sub_title: Runtime Infrastructure" + ], + "pico-sdk/misc.adoc": [ + ":doctitle: External API Headers - Raspberry Pi Documentation", + ":page-sub_title: External API Headers" + ] + } - # get the appropriate working dir - file_path = os.path.join(self.parent_dir, "..", "build", "jekyll") + # get the 
appropriate working dir + file_path = os.path.join(self.parent_dir, "..", "build", "jekyll") - for item in expected: - print("FILE: ", item) - # find the file - this_path = os.path.join(file_path, item) - # make sure the file exists - if os.path.isfile(this_path): - # open the file and read the content - with open(this_path) as f: - content = f.read() - # find each expected line - for line in expected[item]: - print("LOOKING FOR: ", line) - match = re.search(line, content, re.M) - self.assertTrue(match is not None) - else: - print("Could not find this file. did you run `make` first?") + for item in expected: + print("FILE: ", item) + # find the file + this_path = os.path.join(file_path, item) + # make sure the file exists + if os.path.isfile(this_path): + # open the file and read the content + with open(this_path) as f: + content = f.read() + # find each expected line + for line in expected[item]: + print("LOOKING FOR: ", line) + match = re.search(line, content, re.M) + self.assertTrue(match is not None) + else: + print("Could not find this file. did you run `make` first?") def run_doxygen_adoc_tests(event, context): - suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDoxygenAdoc) - result = unittest.TextTestRunner(verbosity=2).run(suite) - if result.wasSuccessful(): - body = { "message": "Tests passed!" } - response = { - "statusCode": 200, - "body": json.dumps(body) - } - return response - else : - body = { "message": "Tests failed!" } - response = { - "statusCode": 500, - "body": json.dumps(body) - } - return response + suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDoxygenAdoc) + result = unittest.TextTestRunner(verbosity=2).run(suite) + if result.wasSuccessful(): + body = { "message": "Tests passed!" } + response = { + "statusCode": 200, + "body": json.dumps(body) + } + return response + else : + body = { "message": "Tests failed!" 
} + response = { + "statusCode": 500, + "body": json.dumps(body) + } + return response if __name__ == '__main__': - unittest.main() + unittest.main() diff --git a/tests/test_create_build_adoc.py b/tests/test_create_build_adoc.py index 30cf7ba4e..772bb816b 100755 --- a/tests/test_create_build_adoc.py +++ b/tests/test_create_build_adoc.py @@ -1,3 +1,5 @@ +#!/usr/bin/env python3 + import os import re import subprocess diff --git a/tests/test_create_build_adoc_doxygen.py b/tests/test_create_build_adoc_doxygen.py index 97ccee44c..0686530df 100644 --- a/tests/test_create_build_adoc_doxygen.py +++ b/tests/test_create_build_adoc_doxygen.py @@ -1,3 +1,5 @@ +#!/usr/bin/env python3 + import os import re import subprocess diff --git a/tests/test_create_build_adoc_include.py b/tests/test_create_build_adoc_include.py index e8d6eb524..ed33677d5 100644 --- a/tests/test_create_build_adoc_include.py +++ b/tests/test_create_build_adoc_include.py @@ -1,3 +1,5 @@ +#!/usr/bin/env python3 + import os import re import subprocess diff --git a/tests/test_create_nav.py b/tests/test_create_nav.py index 8ff5e33a2..0a307048d 100644 --- a/tests/test_create_nav.py +++ b/tests/test_create_nav.py @@ -1,3 +1,5 @@ +#!/usr/bin/env python3 + import os import re import subprocess From 1458333a17167a65496586434a490a71a55dd52f Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Mon, 12 Aug 2024 23:10:42 +0100 Subject: [PATCH 05/18] Fix Makefile rule names --- Makefile | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/Makefile b/Makefile index 282428ce1..1acdff05f 100644 --- a/Makefile +++ b/Makefile @@ -26,7 +26,7 @@ JEKYLL_CMD = bundle exec jekyll .DEFAULT_GOAL := html -.PHONY: clean run_ninja clean_ninja html serve_html clean_html build_doxygen_html clean_doxygen_html build_doxygen_adoc clean_doxygen_adoc fetch_submodules clean_submodules clean_everything +.PHONY: clean run_ninja clean_ninja html serve_html clean_html build_doxygen_xml clean_doxygen_xml build_doxygen_adoc clean_doxygen_adoc fetch_submodules clean_submodules clean_everything $(BUILD_DIR): @mkdir -p $@ @@ -54,7 +54,7 @@ $(PICO_EXAMPLES_DIR)/CMakeLists.txt: | $(PICO_SDK_DIR)/CMakeLists.txt $(PICO_EXA doxygentoasciidoc/__main__.py: git submodule update --init doxygentoasciidoc -fetch_submodules: $(ALL_SUBMODULE_CMAKELISTS) +fetch_submodules: $(ALL_SUBMODULE_CMAKELISTS) doxygentoasciidoc/__main__.py # Get rid of the submodules clean_submodules: @@ -94,7 +94,7 @@ clean_doxygen_adoc: if [ -d $(ASCIIDOC_DOXYGEN_DIR) ]; then $(MAKE) clean_ninja; fi rm -rf $(ASCIIDOC_DOXYGEN_DIR) -clean_everything: clean_submodules clean_doxygen_html clean +clean_everything: clean_submodules clean_doxygen_xml clean # AUTO_NINJABUILD contains all the parts of the ninjabuild where the rules themselves depend on other files $(AUTO_NINJABUILD): $(SCRIPTS_DIR)/create_auto_ninjabuild.py $(DOCUMENTATION_INDEX) $(SITE_CONFIG) | $(BUILD_DIR) From 14676587fe62d0b009bf8264d250a727049ed668 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 12 Aug 2024 23:28:24 +0000 Subject: [PATCH 06/18] Bump pyyaml from 6.0.1 to 6.0.2 Bumps [pyyaml](https://github.com/yaml/pyyaml) from 6.0.1 to 6.0.2. - [Release notes](https://github.com/yaml/pyyaml/releases) - [Changelog](https://github.com/yaml/pyyaml/blob/main/CHANGES) - [Commits](https://github.com/yaml/pyyaml/compare/6.0.1...6.0.2) --- updated-dependencies: - dependency-name: pyyaml dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index f9610bf14..d4aaeb7f9 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,3 +1,3 @@ -pyyaml == 6.0.1 +pyyaml == 6.0.2 lxml beautifulsoup4 From e3667830367da7e6ea21002894d611e7101f1ff1 Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Mon, 12 Aug 2024 23:14:41 +0100 Subject: [PATCH 07/18] Fix usage of PICO_EXAMPLES_PATH --- Makefile | 6 ++-- scripts/create_build_adoc_doxygen.py | 5 ++- scripts/postprocess_doxygen_xml.py | 54 ++++++++++++++-------------- 3 files changed, 33 insertions(+), 32 deletions(-) diff --git a/Makefile b/Makefile index 1acdff05f..22e9feb2d 100644 --- a/Makefile +++ b/Makefile @@ -62,9 +62,9 @@ clean_submodules: # Create the pico-sdk Doxygen XML files $(DOXYGEN_XML_DIR) $(DOXYGEN_XML_DIR)/index.xml: | $(ALL_SUBMODULE_CMAKELISTS) $(DOXYGEN_PICO_SDK_BUILD_DIR) - cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined -D PICO_EXAMPLES_PATH=../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=combined-docs - cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 -D PICO_EXAMPLES_PATH=../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=rp2040 - cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 -D PICO_EXAMPLES_PATH=../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=rp2350 + cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=combined-docs + cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=rp2040 + cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=rp2350 $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined docs $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 docs $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 docs diff --git a/scripts/create_build_adoc_doxygen.py b/scripts/create_build_adoc_doxygen.py index 60cb5e33c..e1a806cb4 100755 --- a/scripts/create_build_adoc_doxygen.py +++ b/scripts/create_build_adoc_doxygen.py @@ -16,9 +16,8 @@ def check_no_markdown(filename): asciidoc = re.sub(r'----\n.*?\n----', '', asciidoc, flags=re.DOTALL) # strip out pass-through blocks asciidoc = re.sub(r'\+\+\+\+\n.*?\n\+\+\+\+', '', asciidoc, flags=re.DOTALL) - # This is messing up the c code blocks - # if re.search(r'(?:^|\n)#+', asciidoc): - # raise Exception("{} contains a Markdown-style header (i.e. '#' rather than '=')".format(filename)) + if re.search(r'(?:^|\n)#+', asciidoc): + raise Exception("{} contains a Markdown-style header (i.e. '#' rather than '=')".format(filename)) if re.search(r'(\[.+?\]\(.+?\))', asciidoc): raise Exception("{} contains a Markdown-style link (i.e. '[title](url)' rather than 'url[title]')".format(filename)) diff --git a/scripts/postprocess_doxygen_xml.py b/scripts/postprocess_doxygen_xml.py index b2ae89314..9af2131ee 100755 --- a/scripts/postprocess_doxygen_xml.py +++ b/scripts/postprocess_doxygen_xml.py @@ -3,7 +3,7 @@ import sys import re import os -import html +#import html from bs4 import BeautifulSoup # walk the combined output. 
@@ -18,30 +18,31 @@ def compile_id_list(xml_content): id_list = [x["id"] for x in els] return id_list -def insert_example_code_from_file(combined_content): - els = combined_content.doxygen.find_all("programlisting") - all_examples = {} - # get the examples path - examples_path = re.sub(r"/scripts/.+$", "/lib/pico-examples", os.path.realpath(__file__)) - # get a recursive list of all files in examples - for f in os.walk(examples_path): - for filename in f[2]: - if filename in all_examples: - all_examples[filename].append(os.path.join(f[0], filename)) - else: - all_examples[filename] = [os.path.join(f[0], filename)] - for el in els: - if el.get("filename") is not None: - filename = el.get("filename") - # find the file here or in examples - if filename in all_examples: - with open(all_examples[filename][0]) as f: - example_content = f.read() - example_lines = example_content.split("\n") - for line in example_lines: - codeline = BeautifulSoup(""+html.escape(line)+"", 'xml') - el.append(codeline) - return combined_content +# Unused code - but kept in case we need it in future +#def insert_example_code_from_file(combined_content): +# els = combined_content.doxygen.find_all("programlisting") +# all_examples = {} +# # get the examples path +# examples_path = re.sub(r"/scripts/.+$", "/lib/pico-examples", os.path.realpath(__file__)) +# # get a recursive list of all files in examples +# for f in os.walk(examples_path): +# for filename in f[2]: +# if filename in all_examples: +# all_examples[filename].append(os.path.join(f[0], filename)) +# else: +# all_examples[filename] = [os.path.join(f[0], filename)] +# for el in els: +# if el.get("filename") is not None: +# filename = el.get("filename") +# # find the file here or in examples +# if filename in all_examples: +# with open(all_examples[filename][0]) as f: +# example_content = f.read() +# example_lines = example_content.split("\n") +# for line in example_lines: +# codeline = BeautifulSoup(""+html.escape(line)+"", 'xml') +# el.append(codeline) +# return combined_content def walk_and_tag_xml_tree(el, output_contexts, all_contexts): """ @@ -123,7 +124,8 @@ def postprocess_doxygen_xml_file(combined_xmlfile, xmlfiles, output_context_path els = combined_content.doxygen.find_all(True, recursive=False) for el in els: walk_and_tag_xml_tree(el, output_contexts, list(output_context_paths.keys())) - combined_content = insert_example_code_from_file(combined_content) + # I think this was only needed because the PICO_EXAMPLES_PATH was wrong in the Makefile + #combined_content = insert_example_code_from_file(combined_content) return str(combined_content) def postprocess_doxygen_xml(xml_path): From 8a1ce2639aab8351b558a99cee26120498fc818d Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Mon, 12 Aug 2024 23:15:52 +0100 Subject: [PATCH 08/18] Silence error message when there are no PNG files to copy --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 22e9feb2d..49b1ba9b8 100644 --- a/Makefile +++ b/Makefile @@ -85,7 +85,7 @@ $(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/indexpage.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/examples_page.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/examples_page.adoc python3 $(SCRIPTS_DIR)/postprocess_doxygen_adoc.py $(ASCIIDOC_DOXYGEN_DIR) - -cp $(DOXYGEN_XML_DIR)/*.png $(ASCIIDOC_DOXYGEN_DIR) + -cp $(DOXYGEN_XML_DIR)/*.png $(ASCIIDOC_DOXYGEN_DIR) 
2>/dev/null || true build_doxygen_adoc: $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc From 5c708ab0a7692a39955f50ffa66456343577e27f Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Mon, 12 Aug 2024 23:21:58 +0100 Subject: [PATCH 09/18] Don't need Picotool to build doxygen, so don't waste time building it --- Makefile | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/Makefile b/Makefile index 49b1ba9b8..c6e7e3c86 100644 --- a/Makefile +++ b/Makefile @@ -62,9 +62,9 @@ clean_submodules: # Create the pico-sdk Doxygen XML files $(DOXYGEN_XML_DIR) $(DOXYGEN_XML_DIR)/index.xml: | $(ALL_SUBMODULE_CMAKELISTS) $(DOXYGEN_PICO_SDK_BUILD_DIR) - cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=combined-docs - cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=rp2040 - cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_PLATFORM=rp2350 + cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_NO_PICOTOOL=1 -D PICO_PLATFORM=combined-docs + cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_NO_PICOTOOL=1 -D PICO_PLATFORM=rp2040 + cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_NO_PICOTOOL=1 -D PICO_PLATFORM=rp2350 $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined docs $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 docs $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 docs From 908528caae573a379a13ce8dbf626e1c4fd47f59 Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Mon, 12 Aug 2024 23:47:56 +0100 Subject: [PATCH 10/18] Move doxygentoasciidoc submodule into the lib directory, in common with other submodules Also set a branch for the submodule, for easier updating --- .gitmodules | 3 ++- Makefile | 15 ++++++++------- doxygentoasciidoc | 1 - lib/doxygentoasciidoc | 1 + 4 files changed, 11 insertions(+), 9 deletions(-) delete mode 160000 doxygentoasciidoc create mode 160000 lib/doxygentoasciidoc diff --git a/.gitmodules b/.gitmodules index 4532ebbdf..9f315972e 100644 --- a/.gitmodules +++ b/.gitmodules @@ -8,5 +8,6 @@ branch = master [submodule "doxygentoasciidoc"] - path = doxygentoasciidoc + path = lib/doxygentoasciidoc url = https://github.com/raspberrypi/doxygentoasciidoc.git + branch = main diff --git a/Makefile b/Makefile index c6e7e3c86..5f99db6a8 100644 --- a/Makefile +++ b/Makefile @@ -16,6 +16,7 @@ AUTO_NINJABUILD = $(BUILD_DIR)/autogenerated.ninja PICO_SDK_DIR = lib/pico-sdk PICO_EXAMPLES_DIR = lib/pico-examples +DOXYGEN_TO_ASCIIDOC_DIR = lib/doxygentoasciidoc ALL_SUBMODULE_CMAKELISTS = $(PICO_SDK_DIR)/CMakeLists.txt $(PICO_EXAMPLES_DIR)/CMakeLists.txt DOXYGEN_PICO_SDK_BUILD_DIR = build-pico-sdk-docs DOXYGEN_XML_DIR = $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined/docs/doxygen/xml @@ -51,10 +52,10 @@ $(PICO_EXAMPLES_DIR)/CMakeLists.txt: | $(PICO_SDK_DIR)/CMakeLists.txt $(PICO_EXA git submodule update --init $(PICO_EXAMPLES_DIR) # Initialise doxygentoasciidoc submodule -doxygentoasciidoc/__main__.py: - git submodule update --init doxygentoasciidoc +$(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py: + git submodule update --init $(DOXYGEN_TO_ASCIIDOC_DIR) -fetch_submodules: $(ALL_SUBMODULE_CMAKELISTS) 
doxygentoasciidoc/__main__.py +fetch_submodules: $(ALL_SUBMODULE_CMAKELISTS) $(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py # Get rid of the submodules clean_submodules: @@ -79,11 +80,11 @@ clean_doxygen_xml: rm -rf $(DOXYGEN_PICO_SDK_BUILD_DIR) # create the sdk adoc and the json file -$(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc: $(ASCIIDOC_DOXYGEN_DIR) $(DOXYGEN_XML_DIR)/index.xml doxygentoasciidoc/__main__.py doxygentoasciidoc/cli.py doxygentoasciidoc/nodes.py doxygentoasciidoc/helpers.py | $(BUILD_DIR) +$(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc: $(ASCIIDOC_DOXYGEN_DIR) $(DOXYGEN_XML_DIR)/index.xml $(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py $(DOXYGEN_TO_ASCIIDOC_DIR)/cli.py $(DOXYGEN_TO_ASCIIDOC_DIR)/nodes.py $(DOXYGEN_TO_ASCIIDOC_DIR)/helpers.py | $(BUILD_DIR) $(MAKE) clean_ninja - python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/index.xml > $(ASCIIDOC_DOXYGEN_DIR)/all_groups.adoc - python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/indexpage.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc - python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/examples_page.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/examples_page.adoc + PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/index.xml > $(ASCIIDOC_DOXYGEN_DIR)/all_groups.adoc + PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/indexpage.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc + PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/examples_page.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/examples_page.adoc python3 $(SCRIPTS_DIR)/postprocess_doxygen_adoc.py $(ASCIIDOC_DOXYGEN_DIR) -cp $(DOXYGEN_XML_DIR)/*.png $(ASCIIDOC_DOXYGEN_DIR) 2>/dev/null || true diff --git a/doxygentoasciidoc b/doxygentoasciidoc deleted file mode 160000 index 70569f25b..000000000 --- a/doxygentoasciidoc +++ /dev/null @@ -1 +0,0 @@ -Subproject commit 70569f25b411d1381232b2997975cc34ac9df629 diff --git a/lib/doxygentoasciidoc b/lib/doxygentoasciidoc new file mode 160000 index 000000000..278bc0874 --- /dev/null +++ b/lib/doxygentoasciidoc @@ -0,0 +1 @@ +Subproject commit 278bc087489951a22c776ee611965d600db4547f From 6e6dca3469b8d64b99588f28e2c728c8e63d186a Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Tue, 13 Aug 2024 16:59:31 +0100 Subject: [PATCH 11/18] Bump doxygentoasciidoc submodule Explicitly install requirements for doxygentoasciidoc Make use of the new -o parameter --- Makefile | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/Makefile b/Makefile index 5f99db6a8..7e0bc6854 100644 --- a/Makefile +++ b/Makefile @@ -80,11 +80,12 @@ clean_doxygen_xml: rm -rf $(DOXYGEN_PICO_SDK_BUILD_DIR) # create the sdk adoc and the json file -$(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc: $(ASCIIDOC_DOXYGEN_DIR) $(DOXYGEN_XML_DIR)/index.xml $(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py $(DOXYGEN_TO_ASCIIDOC_DIR)/cli.py $(DOXYGEN_TO_ASCIIDOC_DIR)/nodes.py $(DOXYGEN_TO_ASCIIDOC_DIR)/helpers.py | $(BUILD_DIR) +$(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc: $(ASCIIDOC_DOXYGEN_DIR) $(DOXYGEN_XML_DIR)/index.xml $(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py $(DOXYGEN_TO_ASCIIDOC_DIR)/cli.py $(DOXYGEN_TO_ASCIIDOC_DIR)/nodes.py $(DOXYGEN_TO_ASCIIDOC_DIR)/helpers.py | $(BUILD_DIR) $(DOXYGEN_TO_ASCIIDOC_DIR)/requirements.txt $(MAKE) clean_ninja - PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. 
python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/index.xml > $(ASCIIDOC_DOXYGEN_DIR)/all_groups.adoc - PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/indexpage.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc - PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/examples_page.xml -c > $(ASCIIDOC_DOXYGEN_DIR)/examples_page.adoc + pip3 install -r $(DOXYGEN_TO_ASCIIDOC_DIR)/requirements.txt + PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/index.xml -o $(ASCIIDOC_DOXYGEN_DIR)/all_groups.adoc + PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/indexpage.xml -c -o $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc + PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -f $(DOXYGEN_XML_DIR)/examples_page.xml -c -o $(ASCIIDOC_DOXYGEN_DIR)/examples_page.adoc python3 $(SCRIPTS_DIR)/postprocess_doxygen_adoc.py $(ASCIIDOC_DOXYGEN_DIR) -cp $(DOXYGEN_XML_DIR)/*.png $(ASCIIDOC_DOXYGEN_DIR) 2>/dev/null || true From 1df206352276ab8cd44b336730dd525e32147043 Mon Sep 17 00:00:00 2001 From: Andrew Scheller Date: Tue, 13 Aug 2024 18:12:19 +0100 Subject: [PATCH 12/18] Add additional doxygen-related dependencies to README.md --- README.md | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index aa306c04c..f9b105881 100644 --- a/README.md +++ b/README.md @@ -10,7 +10,7 @@ Instructions on how to checkout the `documentation` repo, and then install the t ### Checking out the Repository -Install `git` if you don't already have it, and check out the `documentation` repo as follows, +Install `git` if you don't already have it, and check out the `documentation` repo as follows: ``` $ git clone https://github.com/raspberrypi/documentation.git $ cd documentation @@ -22,13 +22,13 @@ $ cd documentation This works on both regular Debian or Ubuntu Linux — and has been tested in a minimal Docker container — and also under Raspberry Pi OS if you are working from a Raspberry Pi. 
-You can install the necessary dependencies on Linux as follows,
+You can install the necessary dependencies on Linux as follows:
 ```
 $ sudo apt install -y ruby ruby-dev python3 python3-pip make ninja-build
 ```
-then add these lines to the bottom of your `$HOME/.bashrc`,
+then add these lines to the bottom of your `$HOME/.bashrc`:
 ```
 export GEM_HOME="$(ruby -e 'puts Gem.user_dir')"
 export PATH="$PATH:$GEM_HOME/bin"
@@ -157,14 +157,20 @@ $ make clean
 ### Building with Doxygen
-If you want to build the Pico C SDK Doxygen documentation alongside the main documentation site you can do so with,
+If you want to build the Pico C SDK Doxygen documentation alongside the main documentation site, you will need to install some additional dependencies:
+
+```
+$ sudo apt install -y cmake gcc-arm-none-eabi doxygen graphviz
+```
+
+and then you can build the documentation with:
 ```
 $ make build_doxygen_adoc
 $ make
 ```
-and clean up afterwards by using,
+You can clean up afterwards by using:
 ```
 $ make clean_everything

From e230af0bc0fc31a2f2b6f34720f1e66d5b4833af Mon Sep 17 00:00:00 2001
From: zhu-hongwei
Date: Wed, 14 Aug 2024 14:27:42 +0800
Subject: [PATCH 13/18] Fix rpicam demo

---
 documentation/asciidoc/computers/camera/rpicam_still.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/documentation/asciidoc/computers/camera/rpicam_still.adoc b/documentation/asciidoc/computers/camera/rpicam_still.adoc
index 4ed336c4d..25caf4d34 100644
--- a/documentation/asciidoc/computers/camera/rpicam_still.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_still.adoc
@@ -118,7 +118,7 @@ First, create a directory where you can store your time lapse photos:
 $ mkdir timelapse
 ----
-Run the following command to create a time lapse over 30 seconds, recording a photo every two seconds, saving output into `image0001.jpg` through `image0014.jpg`:
+Run the following command to create a time lapse over 30 seconds, recording a photo every two seconds, saving output into `image0000.jpg` through `image0013.jpg`:
 [source,console]
 ----

From 2e1e96b0fdf4a3c630e0d0eba669bb693c0d7079 Mon Sep 17 00:00:00 2001
From: Andrew Scheller
Date: Thu, 15 Aug 2024 15:10:31 +0100
Subject: [PATCH 14/18] Bump doxygentoasciidoc submodule

---
 lib/doxygentoasciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/doxygentoasciidoc b/lib/doxygentoasciidoc
index 278bc0874..da3821a03 160000
--- a/lib/doxygentoasciidoc
+++ b/lib/doxygentoasciidoc
@@ -1 +1 @@
-Subproject commit 278bc087489951a22c776ee611965d600db4547f
+Subproject commit da3821a031cc31d4535050ad9f332445baca1707

From 347c897bb54e569d6bfcd98e59be870d6b94597f Mon Sep 17 00:00:00 2001
From: nate contino
Date: Mon, 19 Aug 2024 14:24:16 +0100
Subject: [PATCH 15/18] Update device-tree.adoc

---
 documentation/asciidoc/computers/configuration/device-tree.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/documentation/asciidoc/computers/configuration/device-tree.adoc b/documentation/asciidoc/computers/configuration/device-tree.adoc
index c8d90c2a3..41dbdc536 100644
--- a/documentation/asciidoc/computers/configuration/device-tree.adoc
+++ b/documentation/asciidoc/computers/configuration/device-tree.adoc
@@ -824,7 +824,7 @@ The loading of overlays at runtime is a recent addition to the kernel, and at th
 [[part3.6]]
 ==== Supported overlays and parameters
-Please refer to the https://github.com/raspberrypi/firmware/blob/master/boot/firmware/overlays/README[README] file found alongside the overlay `.dtbo` files in `/boot/firmware/overlays`. It is kept up-to-date with additions and changes.
+For a list of supported overlays and parameters, see the https://github.com/raspberrypi/firmware/blob/master/boot/overlays/README[firmware README] file found alongside the overlay `.dtbo` files in `/boot/overlays`. It is kept up-to-date with additions and changes.
 [[part4]]
 === Firmware parameters

From f66bb530d0c64aaf8654bf5b8b370c8e9240d1f5 Mon Sep 17 00:00:00 2001
From: nate contino
Date: Mon, 19 Aug 2024 14:32:37 +0100
Subject: [PATCH 16/18] Update documentation/asciidoc/computers/configuration/device-tree.adoc

---
 documentation/asciidoc/computers/configuration/device-tree.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/documentation/asciidoc/computers/configuration/device-tree.adoc b/documentation/asciidoc/computers/configuration/device-tree.adoc
index 41dbdc536..875ccb58f 100644
--- a/documentation/asciidoc/computers/configuration/device-tree.adoc
+++ b/documentation/asciidoc/computers/configuration/device-tree.adoc
@@ -824,7 +824,7 @@ The loading of overlays at runtime is a recent addition to the kernel, and at th
 [[part3.6]]
 ==== Supported overlays and parameters
-For a list of supported overlays and parameters, see the https://github.com/raspberrypi/firmware/blob/master/boot/overlays/README[firmware README] file found alongside the overlay `.dtbo` files in `/boot/overlays`. It is kept up-to-date with additions and changes.
+For a list of supported overlays and parameters, see the https://github.com/raspberrypi/firmware/blob/master/boot/overlays/README[firmware README] file found alongside the overlay `.dtbo` files in `/boot/firmware/overlays`. It is kept up-to-date with additions and changes.
 [[part4]]
 === Firmware parameters

From 0940b8178cf75bbd4a24ca10163568f25ea78adc Mon Sep 17 00:00:00 2001
From: nate contino
Date: Mon, 19 Aug 2024 14:38:19 +0100
Subject: [PATCH 17/18] Update documentation/asciidoc/computers/configuration/device-tree.adoc

---
 documentation/asciidoc/computers/configuration/device-tree.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/documentation/asciidoc/computers/configuration/device-tree.adoc b/documentation/asciidoc/computers/configuration/device-tree.adoc
index 875ccb58f..5504bfffe 100644
--- a/documentation/asciidoc/computers/configuration/device-tree.adoc
+++ b/documentation/asciidoc/computers/configuration/device-tree.adoc
@@ -824,7 +824,7 @@ The loading of overlays at runtime is a recent addition to the kernel, and at th
 [[part3.6]]
 ==== Supported overlays and parameters
-For a list of supported overlays and parameters, see the https://github.com/raspberrypi/firmware/blob/master/boot/overlays/README[firmware README] file found alongside the overlay `.dtbo` files in `/boot/firmware/overlays`. It is kept up-to-date with additions and changes.
+For a list of supported overlays and parameters, see the https://github.com/raspberrypi/firmware/blob/master/boot/overlays/README[README] file found alongside the overlay `.dtbo` files in `/boot/firmware/overlays`. It is kept up-to-date with additions and changes.
 [[part4]]
 === Firmware parameters

From 4069e4e22b9f0e6f0f8d76097c8c0f1f2cac99bd Mon Sep 17 00:00:00 2001
From: Nate Contino
Date: Mon, 19 Aug 2024 10:17:25 -0400
Subject: [PATCH 18/18] Minor updates for 2GB Pi 5

---
 .../asciidoc/computers/processors/bcm2712.adoc | 10 +++++++---
 .../computers/raspberry-pi/introduction.adoc | 12 +++++++-----
 lib/doxygentoasciidoc | 2 +-
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/documentation/asciidoc/computers/processors/bcm2712.adoc b/documentation/asciidoc/computers/processors/bcm2712.adoc
index a34e1ceb8..f229d9027 100644
--- a/documentation/asciidoc/computers/processors/bcm2712.adoc
+++ b/documentation/asciidoc/computers/processors/bcm2712.adoc
@@ -1,8 +1,8 @@
 == BCM2712
-Broadcom BCM2712 is the 16nm application processor at the heart of Raspberry Pi 5. It is the successor to the BCM2711 device used in Raspberry Pi 4, and shares many common architectural features with other devices in the BCM27xx family, used on earlier Raspberry Pi products.
+Broadcom BCM2712 is the 16nm application processor at the heart of Raspberry Pi 5. It is the successor to the BCM2711 device used in Raspberry Pi 4, and shares many common architectural features with other devices in the BCM27xx family, used on earlier Raspberry Pi products. 4GB and 8GB Raspberry Pi 5 models use the BCM2712**C1** stepping.
-Built around a quad-core Arm Cortex-A76 CPU cluster, clocked at up to 2.4GHz, with 512KB per-core L2 caches and a 2MB shared L3 cache, it integrates an improved 12-core VideoCore VII GPU; a hardware video scaler and HDMI controller capable of driving dual 4Kp60 displays; and a Raspberry Pi-developed HEVC decoder and Image Signal Processor. A 32-bit LPDDR4X memory interface provides up to 17GB/s of memory bandwidth, while x1 and x4 PCI Express interfaces support high-bandwidth external peripherals; on Raspberry Pi 5 the latter is used to connect to the Raspberry Pi RP1 south bridge, which provides the bulk of the external-facing I/O functionality on the platform.
+Built around a quad-core Arm Cortex-A76 CPU cluster, clocked at up to 2.4GHz, with 512KB per-core L2 caches and a 2MB shared L3 cache, it integrates an improved 12-core VideoCore VII GPU; a hardware video scaler and HDMI controller capable of driving dual 4Kp60 displays; and a Raspberry Pi-developed HEVC decoder and Image Signal Processor. A 32-bit LPDDR4X memory interface provides up to 17GB/s of memory bandwidth, while ×1 and ×4 PCI Express interfaces support high-bandwidth external peripherals; on Raspberry Pi 5 the latter is used to connect to the Raspberry Pi RP1 south bridge, which provides the bulk of the external-facing I/O functionality on the platform.
 Headline features include:
@@ -23,4 +23,8 @@ Headline features include:
 ** H264 1080p60 decode ~50–60% of CPU
 ** H264 1080p30 encode (from ISP) ~30–40% CPU
-In aggregate, the new features present in BCM2712 deliver a performance uplift of 2-3x over Raspberry Pi 4 for common CPU or I/O-intensive use cases.
+In aggregate, the new features present in BCM2712 deliver a performance uplift of 2-3× over Raspberry Pi 4 for common CPU or I/O-intensive use cases.
+
+=== BCM2712D0
+
+The **D0** stepping of BCM2712 removes unused functionality from BCM2712C1. There is no functional difference between the C1 and D0 steppings. Physically, the packages use the same amount of space.
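The "up to 17GB/s" figure quoted for the LPDDR4X interface follows directly from bus width multiplied by transfer rate. A minimal sanity check of that arithmetic, assuming the LPDDR4X-4267 speed grade (4267 million transfers per second) widely reported for Raspberry Pi 5 rather than anything stated in this patch:

[source,console]
----
$ echo "$(( 4267 * 32 / 8 )) MB/s"    # transfers/s x bus width (32 bits) / 8 bits per byte
17068 MB/s
----

which rounds to the 17GB/s headline number; the exact value depends on the memory device actually fitted.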
diff --git a/documentation/asciidoc/computers/raspberry-pi/introduction.adoc b/documentation/asciidoc/computers/raspberry-pi/introduction.adoc
index d843862ef..e7c689d2f 100644
--- a/documentation/asciidoc/computers/raspberry-pi/introduction.adoc
+++ b/documentation/asciidoc/computers/raspberry-pi/introduction.adoc
@@ -174,8 +174,10 @@ a|
 ^.^a|
 .Raspberry Pi 5
 image::images/5.jpg[alt="Raspberry Pi 5"]
-| xref:processors.adoc#bcm2712[BCM2712]
+| xref:processors.adoc#bcm2712[BCM2712] (2GB version uses xref:processors.adoc#bcm2712[BCM2712D0])
 a|
+2GB
+
 4GB
 8GB
 | 40-pin GPIO header
@@ -333,10 +335,6 @@ Models with the *H* suffix have header pins pre-soldered to the GPIO header. Mod
 |===
 | Model | SoC | Memory | Storage | GPIO | Wireless Connectivity
-a|
-.Raspberry Pi Pico 2
-image::images/pico-2.png[alt="Raspberry Pi Pico 2"]
-| xref:../microcontrollers/silicon.adoc#rp2350[RP2350] | 520KB | 2MB | 40-pin GPIO header (unpopulated) ^| none
 a|
 .Raspberry Pi Pico
 image::images/pico.png[alt="Raspberry Pi Pico"]
@@ -359,6 +357,10 @@ image::images/pico-wh.png[alt="Raspberry Pi Pico WH"]
 a|
 * 2.4GHz single-band 802.11n Wi-Fi (10Mb/s)
 * Bluetooth 5.2, Bluetooth Low Energy (BLE)
+a|
+.Raspberry Pi Pico 2
+image::images/pico-2.png[alt="Raspberry Pi Pico 2"]
+| xref:../microcontrollers/silicon.adoc#rp2350[RP2350] | 520KB | 2MB | 40-pin GPIO header (unpopulated) ^| none
 |===
diff --git a/lib/doxygentoasciidoc b/lib/doxygentoasciidoc
index da3821a03..278bc0874 160000
--- a/lib/doxygentoasciidoc
+++ b/lib/doxygentoasciidoc
@@ -1 +1 @@
-Subproject commit da3821a031cc31d4535050ad9f332445baca1707
+Subproject commit 278bc087489951a22c776ee611965d600db4547f
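The `Subproject commit` hunks here and in PATCH 14/18 are gitlink entries rather than ordinary file content: the repository records only which commit of the `lib/doxygentoasciidoc` submodule it expects, and this final hunk moves that pointer back to `278bc0874`, undoing the earlier bump to `da3821a03`. A minimal sketch of how such a pointer change is typically produced (the commit message is illustrative, not taken from this series):

[source,console]
----
$ git -C lib/doxygentoasciidoc fetch origin
$ git -C lib/doxygentoasciidoc checkout 278bc087489951a22c776ee611965d600db4547f
$ git add lib/doxygentoasciidoc    # stages the new gitlink, not the submodule's files
$ git commit -m "Pin doxygentoasciidoc submodule"
----

Anyone checking out the result then runs `git submodule update --init` to move their copy of the submodule to the recorded commit.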