+
+%
+
diff --git a/assets/stop-words-en.txt b/assets/stop-words-en.txt
new file mode 100644
index 0000000000..cc09be2ec7
--- /dev/null
+++ b/assets/stop-words-en.txt
@@ -0,0 +1,1320 @@
+# Stop words from https://github.com/Alir3z4/stop-words.
+
+'ll
+'tis
+'twas
+'ve
+a
+a's
+able
+ableabout
+about
+above
+abroad
+abst
+accordance
+according
+accordingly
+across
+act
+actually
+ad
+added
+adj
+adopted
+ae
+af
+affected
+affecting
+affects
+after
+afterwards
+ag
+again
+against
+ago
+ah
+ahead
+ai
+ain't
+aint
+al
+all
+allow
+allows
+almost
+alone
+along
+alongside
+already
+also
+although
+always
+am
+amid
+amidst
+among
+amongst
+amoungst
+amount
+an
+and
+announce
+another
+any
+anybody
+anyhow
+anymore
+anyone
+anything
+anyway
+anyways
+anywhere
+ao
+apart
+apparently
+appear
+appreciate
+appropriate
+approximately
+aq
+ar
+are
+area
+areas
+aren
+aren't
+arent
+arise
+around
+arpa
+as
+aside
+ask
+asked
+asking
+asks
+associated
+at
+au
+auth
+available
+aw
+away
+awfully
+az
+b
+ba
+back
+backed
+backing
+backs
+backward
+backwards
+bb
+bd
+be
+became
+because
+become
+becomes
+becoming
+been
+before
+beforehand
+began
+begin
+beginning
+beginnings
+begins
+behind
+being
+beings
+believe
+below
+beside
+besides
+best
+better
+between
+beyond
+bf
+bg
+bh
+bi
+big
+bill
+billion
+biol
+bj
+bm
+bn
+bo
+both
+bottom
+br
+brief
+briefly
+bs
+bt
+but
+buy
+bv
+bw
+by
+bz
+c
+c'mon
+c's
+ca
+call
+came
+can
+can't
+cannot
+cant
+caption
+case
+cases
+cause
+causes
+cc
+cd
+certain
+certainly
+cf
+cg
+ch
+changes
+ci
+ck
+cl
+clear
+clearly
+click
+cm
+cmon
+cn
+co
+co.
+com
+come
+comes
+computer
+con
+concerning
+consequently
+consider
+considering
+contain
+containing
+contains
+copy
+corresponding
+could
+could've
+couldn
+couldn't
+couldnt
+course
+cr
+cry
+cs
+cu
+currently
+cv
+cx
+cy
+cz
+d
+dare
+daren't
+darent
+date
+de
+dear
+definitely
+describe
+described
+despite
+detail
+did
+didn
+didn't
+didnt
+differ
+different
+differently
+directly
+dj
+dk
+dm
+do
+does
+doesn
+doesn't
+doesnt
+doing
+don
+don't
+done
+dont
+doubtful
+down
+downed
+downing
+downs
+downwards
+due
+during
+dz
+e
+each
+early
+ec
+ed
+edu
+ee
+effect
+eg
+eh
+eight
+eighty
+either
+eleven
+else
+elsewhere
+empty
+end
+ended
+ending
+ends
+enough
+entirely
+er
+es
+especially
+et
+et-al
+etc
+even
+evenly
+ever
+evermore
+every
+everybody
+everyone
+everything
+everywhere
+ex
+exactly
+example
+except
+f
+face
+faces
+fact
+facts
+fairly
+far
+farther
+felt
+few
+fewer
+ff
+fi
+fifteen
+fifth
+fifty
+fify
+fill
+find
+finds
+fire
+first
+five
+fix
+fj
+fk
+fm
+fo
+followed
+following
+follows
+for
+forever
+former
+formerly
+forth
+forty
+forward
+found
+four
+fr
+free
+from
+front
+full
+fully
+further
+furthered
+furthering
+furthermore
+furthers
+fx
+g
+ga
+gave
+gb
+gd
+ge
+general
+generally
+get
+gets
+getting
+gf
+gg
+gh
+gi
+give
+given
+gives
+giving
+gl
+gm
+gmt
+gn
+go
+goes
+going
+gone
+good
+goods
+got
+gotten
+gov
+gp
+gq
+gr
+great
+greater
+greatest
+greetings
+group
+grouped
+grouping
+groups
+gs
+gt
+gu
+gw
+gy
+h
+had
+hadn't
+hadnt
+half
+happens
+hardly
+has
+hasn
+hasn't
+hasnt
+have
+haven
+haven't
+havent
+having
+he
+he'd
+he'll
+he's
+hed
+hell
+hello
+help
+hence
+her
+here
+here's
+hereafter
+hereby
+herein
+heres
+hereupon
+hers
+herself
+herse”
+hes
+hi
+hid
+high
+higher
+highest
+him
+himself
+himse”
+his
+hither
+hk
+hm
+hn
+home
+homepage
+hopefully
+how
+how'd
+how'll
+how's
+howbeit
+however
+hr
+ht
+htm
+html
+http
+hu
+hundred
+i
+i'd
+i'll
+i'm
+i've
+i.e.
+id
+ie
+if
+ignored
+ii
+il
+ill
+im
+immediate
+immediately
+importance
+important
+in
+inasmuch
+inc
+inc.
+indeed
+index
+indicate
+indicated
+indicates
+information
+inner
+inside
+insofar
+instead
+int
+interest
+interested
+interesting
+interests
+into
+invention
+inward
+io
+iq
+ir
+is
+isn
+isn't
+isnt
+it
+it'd
+it'll
+it's
+itd
+itll
+its
+itself
+itse”
+ive
+j
+je
+jm
+jo
+join
+jp
+just
+k
+ke
+keep
+keeps
+kept
+keys
+kg
+kh
+ki
+kind
+km
+kn
+knew
+know
+known
+knows
+kp
+kr
+kw
+ky
+kz
+l
+la
+large
+largely
+last
+lately
+later
+latest
+latter
+latterly
+lb
+lc
+least
+length
+less
+lest
+let
+let's
+lets
+li
+like
+liked
+likely
+likewise
+line
+little
+lk
+ll
+long
+longer
+longest
+look
+looking
+looks
+low
+lower
+lr
+ls
+lt
+ltd
+lu
+lv
+ly
+m
+ma
+made
+mainly
+make
+makes
+making
+man
+many
+may
+maybe
+mayn't
+maynt
+mc
+md
+me
+mean
+means
+meantime
+meanwhile
+member
+members
+men
+merely
+mg
+mh
+microsoft
+might
+might've
+mightn't
+mightnt
+mil
+mill
+million
+mine
+minus
+miss
+mk
+ml
+mm
+mn
+mo
+more
+moreover
+most
+mostly
+move
+mp
+mq
+mr
+mrs
+ms
+msie
+mt
+mu
+much
+mug
+must
+must've
+mustn't
+mustnt
+mv
+mw
+mx
+my
+myself
+myse”
+mz
+n
+na
+name
+namely
+nay
+nc
+nd
+ne
+near
+nearly
+necessarily
+necessary
+need
+needed
+needing
+needn't
+neednt
+needs
+neither
+net
+netscape
+never
+neverf
+neverless
+nevertheless
+new
+newer
+newest
+next
+nf
+ng
+ni
+nine
+ninety
+nl
+no
+no-one
+nobody
+non
+none
+nonetheless
+noone
+nor
+normally
+nos
+not
+noted
+nothing
+notwithstanding
+novel
+now
+nowhere
+np
+nr
+nu
+null
+number
+numbers
+nz
+o
+obtain
+obtained
+obviously
+of
+off
+often
+oh
+ok
+okay
+old
+older
+oldest
+om
+omitted
+on
+once
+one
+one's
+ones
+only
+onto
+open
+opened
+opening
+opens
+opposite
+or
+ord
+order
+ordered
+ordering
+orders
+org
+other
+others
+otherwise
+ought
+oughtn't
+oughtnt
+our
+ours
+ourselves
+out
+outside
+over
+overall
+owing
+own
+p
+pa
+page
+pages
+part
+parted
+particular
+particularly
+parting
+parts
+past
+pe
+per
+perhaps
+pf
+pg
+ph
+pk
+pl
+place
+placed
+places
+please
+plus
+pm
+pmid
+pn
+point
+pointed
+pointing
+points
+poorly
+possible
+possibly
+potentially
+pp
+pr
+predominantly
+present
+presented
+presenting
+presents
+presumably
+previously
+primarily
+probably
+problem
+problems
+promptly
+proud
+provided
+provides
+pt
+put
+puts
+pw
+py
+q
+qa
+que
+quickly
+quite
+qv
+r
+ran
+rather
+rd
+re
+readily
+really
+reasonably
+recent
+recently
+ref
+refs
+regarding
+regardless
+regards
+related
+relatively
+research
+reserved
+respectively
+resulted
+resulting
+results
+right
+ring
+ro
+room
+rooms
+round
+ru
+run
+rw
+s
+sa
+said
+same
+saw
+say
+saying
+says
+sb
+sc
+sd
+se
+sec
+second
+secondly
+seconds
+section
+see
+seeing
+seem
+seemed
+seeming
+seems
+seen
+sees
+self
+selves
+sensible
+sent
+serious
+seriously
+seven
+seventy
+several
+sg
+sh
+shall
+shan't
+shant
+she
+she'd
+she'll
+she's
+shed
+shell
+shes
+should
+should've
+shouldn
+shouldn't
+shouldnt
+show
+showed
+showing
+shown
+showns
+shows
+si
+side
+sides
+significant
+significantly
+similar
+similarly
+since
+sincere
+site
+six
+sixty
+sj
+sk
+sl
+slightly
+sm
+small
+smaller
+smallest
+sn
+so
+some
+somebody
+someday
+somehow
+someone
+somethan
+something
+sometime
+sometimes
+somewhat
+somewhere
+soon
+sorry
+specifically
+specified
+specify
+specifying
+sr
+st
+state
+states
+still
+stop
+strongly
+su
+sub
+substantially
+successfully
+such
+sufficiently
+suggest
+sup
+sure
+sv
+sy
+system
+sz
+t
+t's
+take
+taken
+taking
+tc
+td
+tell
+ten
+tends
+test
+text
+tf
+tg
+th
+than
+thank
+thanks
+thanx
+that
+that'll
+that's
+that've
+thatll
+thats
+thatve
+the
+their
+theirs
+them
+themselves
+then
+thence
+there
+there'd
+there'll
+there're
+there's
+there've
+thereafter
+thereby
+thered
+therefore
+therein
+therell
+thereof
+therere
+theres
+thereto
+thereupon
+thereve
+these
+they
+they'd
+they'll
+they're
+they've
+theyd
+theyll
+theyre
+theyve
+thick
+thin
+thing
+things
+think
+thinks
+third
+thirty
+this
+thorough
+thoroughly
+those
+thou
+though
+thoughh
+thought
+thoughts
+thousand
+three
+throug
+through
+throughout
+thru
+thus
+til
+till
+tip
+tis
+tj
+tk
+tm
+tn
+to
+today
+together
+too
+took
+top
+toward
+towards
+tp
+tr
+tried
+tries
+trillion
+truly
+try
+trying
+ts
+tt
+turn
+turned
+turning
+turns
+tv
+tw
+twas
+twelve
+twenty
+twice
+two
+tz
+u
+ua
+ug
+uk
+um
+un
+under
+underneath
+undoing
+unfortunately
+unless
+unlike
+unlikely
+until
+unto
+up
+upon
+ups
+upwards
+us
+use
+used
+useful
+usefully
+usefulness
+uses
+using
+usually
+uucp
+uy
+uz
+v
+va
+value
+various
+vc
+ve
+versus
+very
+vg
+vi
+via
+viz
+vn
+vol
+vols
+vs
+vu
+w
+want
+wanted
+wanting
+wants
+was
+wasn
+wasn't
+wasnt
+way
+ways
+we
+we'd
+we'll
+we're
+we've
+web
+webpage
+website
+wed
+welcome
+well
+wells
+went
+were
+weren
+weren't
+werent
+weve
+wf
+what
+what'd
+what'll
+what's
+what've
+whatever
+whatll
+whats
+whatve
+when
+when'd
+when'll
+when's
+whence
+whenever
+where
+where'd
+where'll
+where's
+whereafter
+whereas
+whereby
+wherein
+wheres
+whereupon
+wherever
+whether
+which
+whichever
+while
+whilst
+whim
+whither
+who
+who'd
+who'll
+who's
+whod
+whoever
+whole
+wholl
+whom
+whomever
+whos
+whose
+why
+why'd
+why'll
+why's
+widely
+width
+will
+willing
+wish
+with
+within
+without
+won
+won't
+wonder
+wont
+words
+work
+worked
+working
+works
+world
+would
+would've
+wouldn
+wouldn't
+wouldnt
+ws
+www
+x
+y
+ye
+year
+years
+yes
+yet
+you
+you'd
+you'll
+you're
+you've
+youd
+youll
+young
+younger
+youngest
+your
+youre
+yours
+yourself
+yourselves
+youve
+yt
+yu
+z
+za
+zero
+zm
+zr
+
+# Additional stop words specific to POD and sample problem documentation.
+constructor
+description
+error
+errors
+macro
+macros
+pod
+podlink
+problink
+synopsis
+usage
+function
+functions
+method
+methods
+option
+options
+todo
+fixme
+_
diff --git a/bin/generate-pg-pod.pl b/bin/generate-pg-pod.pl
new file mode 100755
index 0000000000..98f73a3458
--- /dev/null
+++ b/bin/generate-pg-pod.pl
@@ -0,0 +1,80 @@
+#!/usr/bin/env perl
+
+=head1 NAME
+
+generate-pg-pod.pl - Convert PG POD into HTML form.
+
+=head1 SYNOPSIS
+
+generate-pg-pod.pl [options]
+
+ Options:
+ -o|--output-dir Directory to save the output files to. (required)
+ -b|--base-url Base url location used on server. (default: /)
+ This is needed for internal POD links to work correctly.
+ -h|--home-url Home page url on the server. (default: /)
+ -v|--verbose Increase the verbosity of the output.
+ (Use multiple times for more verbosity.)
+
+=head1 DESCRIPTION
+
+Convert PG POD into HTML form.
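+
+For example, a typical invocation (the paths and URLs shown are only
+illustrative) might be
+
+    generate-pg-pod.pl --output-dir /var/www/pg-pod --base-url /pg-pod --home-url / -v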
+
+=cut
+
+use strict;
+use warnings;
+
+use Getopt::Long qw(:config bundling);
+use Pod::Usage;
+
+my ($output_dir, $base_url, $home_url);
+my $verbose = 0;
+GetOptions(
+ 'o|output-dir=s' => \$output_dir,
+ 'b|base-url=s' => \$base_url,
+ 'h|home-url=s' => \$home_url,
+ 'v|verbose+' => \$verbose
+);
+
+pod2usage(2) unless $output_dir;
+
+$base_url = "/" if !$base_url;
+$home_url = "/" if !$home_url;
+
+use Mojo::Template;
+use IO::File;
+use File::Copy;
+use File::Path qw(make_path remove_tree);
+use File::Basename qw(dirname);
+use Cwd qw(abs_path);
+
+use lib abs_path(dirname(dirname(__FILE__))) . '/lib';
+
+use WeBWorK::Utils::PODtoHTML;
+
+my $pg_root = abs_path(dirname(dirname(__FILE__)));
+
+print "Reading: $pg_root\n" if $verbose;
+
+remove_tree($output_dir);
+make_path($output_dir);
+
+my $htmldocs = WeBWorK::Utils::PODtoHTML->new(
+ source_root => $pg_root,
+ dest_root => $output_dir,
+ template_dir => "$pg_root/assets/pod-templates",
+ dest_url => $base_url,
+ home_url => $home_url,
+ home_url_link_name => 'PG Documentation Home',
+ verbose => $verbose
+);
+$htmldocs->convert_pods;
+
+make_path("$output_dir/assets");
+copy("$pg_root/htdocs/js/PODViewer/podviewer.css", "$output_dir/assets/podviewer.css");
+print "copying $pg_root/htdocs/js/PODViewer/podviewer.css to $output_dir/assets/podviewer.css\n" if $verbose;
+copy("$pg_root/htdocs/js/PODViewer/podviewer.js", "$output_dir/assets/podviewer.js");
+print "copying $pg_root/htdocs/js/PODViewer/podviewer.css to $output_dir/assets/podviewer.js\n" if $verbose;
+
+1;
diff --git a/bin/generate-search-data.pl b/bin/generate-search-data.pl
new file mode 100755
index 0000000000..c3ab227eec
--- /dev/null
+++ b/bin/generate-search-data.pl
@@ -0,0 +1,42 @@
+#!/usr/bin/env perl
+
+=head1 NAME
+
+generate-search-data.pl - Generate search data for macro and sample problem
+documentation.
+
+=head1 SYNOPSIS
+
+generate-search-data.pl [options]
+
+ Options:
+ -o|--out-file File to save the search data to. (required)
+
+=head1 DESCRIPTION
+
+Generate search data for macro and sample problem documentation.
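+
+For example (the output file path is only illustrative):
+
+    generate-search-data.pl --out-file /var/www/pg-docs/sample-problem-search-data.json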
+
+=cut
+
+use strict;
+use warnings;
+
+my $pgRoot;
+
+use Mojo::File qw(curfile);
+BEGIN { $pgRoot = curfile->dirname->dirname; }
+
+use lib "$pgRoot/lib";
+
+use Getopt::Long;
+use Pod::Usage;
+
+use WeBWorK::PG::SampleProblemParser qw(getSearchData);
+
+my $outFile;
+GetOptions("o|out-file=s" => \$outFile);
+pod2usage(2) unless $outFile;
+
+getSearchData($outFile);
+
+1;
diff --git a/bin/parse-problem-doc.pl b/bin/parse-problem-doc.pl
index b2274b198a..818416093d 100755
--- a/bin/parse-problem-doc.pl
+++ b/bin/parse-problem-doc.pl
@@ -1,5 +1,29 @@
#!/usr/bin/env perl
+=head1 NAME
+
+parse-problem-doc.pl - Parse sample problem documentation.
+
+=head1 SYNOPSIS
+
+parse-problem-doc.pl [options]
+
+ Options:
+ -d|--problem-dir Directory containing sample problems to be parsed.
+ This defaults to the tutorial/sample-problems directory
+ in the PG root directory if not given.
+ -o|--out-dir Directory to save the output files to. (required)
+ -p|--pod-base-url Base URL location for POD on server. (required)
+ -s|--sample-problem-base-url
+ Base URL location for sample problems on server. (required)
+ -v|--verbose Give verbose feedback.
+
+=head1 DESCRIPTION
+
+Parse sample problem documentation.
+
+=cut
+
use strict;
use warnings;
use experimental 'signatures';
@@ -17,26 +41,26 @@ BEGIN
use Mojo::Template;
use File::Basename qw(basename);
use Getopt::Long;
+use Pod::Usage;
use File::Copy qw(copy);
use Pod::Simple::Search;
-use SampleProblemParser qw(parseSampleProblem generateMetadata);
+use WeBWorK::PG::SampleProblemParser qw(parseSampleProblem generateMetadata);
my $problem_dir = "$pg_root/tutorial/sample-problems";
-my ($out_dir, $pod_root, $pg_doc_home);
+my ($out_dir, $pod_base_url, $sample_problem_base_url);
my $verbose = 0;
GetOptions(
- "d|problem_dir=s" => \$problem_dir,
- "o|out_dir=s" => \$out_dir,
- "v|verbose" => \$verbose,
- "p|pod_root=s" => \$pod_root,
- "h|pg_doc_home=s" => \$pg_doc_home,
+ "d|problem-dir=s" => \$problem_dir,
+ "o|out-dir=s" => \$out_dir,
+ "p|pod-base-url=s" => \$pod_base_url,
+ "s|sample-problem-base-url=s" => \$sample_problem_base_url,
+ "v|verbose" => \$verbose
);
-die "out_dir, pod_root, and pg_doc_home must be provided.\n"
- unless $out_dir && $pod_root && $pg_doc_home;
+pod2usage(2) unless $out_dir && $pod_base_url && $sample_problem_base_url;
my $mt = Mojo::Template->new(vars => 1);
my $template_dir = "$pg_root/tutorial/templates";
@@ -46,7 +70,7 @@ BEGIN
my @problem_types = qw(sample technique snippet);
-$pod_root .= '/pg/macros';
+$pod_base_url .= '/macros';
mkdir $out_dir unless -d $out_dir;
# Build a hash of all PG files for linking.
@@ -55,16 +79,16 @@ BEGIN
for (keys %$index_table) {
renderSampleProblem(
$_ =~ s/.pg$//r,
- metadata => $index_table,
- macro_locations => $macro_locations,
- pod_root => $pod_root,
- pg_doc_home => $pg_doc_home,
- url_extension => '.html',
- problem_dir => $problem_dir,
- out_dir => $out_dir,
- template_dir => $template_dir,
- mt => $mt,
- verbose => $verbose
+ metadata => $index_table,
+ macro_locations => $macro_locations,
+ pod_base_url => $pod_base_url,
+ sample_problem_base_url => $sample_problem_base_url,
+ url_extension => '.html',
+ problem_dir => $problem_dir,
+ out_dir => $out_dir,
+ template_dir => $template_dir,
+ mt => $mt,
+ verbose => $verbose
);
}
diff --git a/htdocs/js/PODViewer/podviewer.css b/htdocs/js/PODViewer/podviewer.css
new file mode 100644
index 0000000000..e4f17811d2
--- /dev/null
+++ b/htdocs/js/PODViewer/podviewer.css
@@ -0,0 +1,66 @@
+.main-index-header,
+.pod-header {
+ height: 65px;
+ top: 0;
+ left: 0;
+ right: 0;
+ z-index: 2;
+}
+
+#sidebar {
+ --bs-offcanvas-width: 300px;
+ overflow-y: auto;
+}
+
+#sidebar ul.nav ul.nav li {
+ border-left: 1px solid #e1e4e8;
+ padding-left: 10px;
+}
+
+#sidebar ul.nav ul.nav li:hover {
+ border-left: 6px solid #e1e4e8;
+ padding-left: 5px;
+}
+
+.main-index-container,
+.pod-page-container {
+ margin-top: 65px;
+}
+
+@media only screen and (min-width: 768px) {
+ #sidebar {
+ height: calc(100vh - 65px);
+ width: 300px;
+ }
+
+ .pod-page-container {
+ margin-left: 300px;
+ }
+}
+
+#_podtop_ pre {
+ border: 1px solid #ccc;
+ border-radius: 5px;
+ background: #f6f6f6;
+ padding: 0.75rem;
+}
+
+#_podtop_,
+#_podtop_ *[id] {
+ scroll-margin-top: calc(65px + 1rem);
+}
+
+@media only screen and (max-width: 768px) {
+ .pod-header {
+ height: 100px;
+ }
+
+ .pod-page-container {
+ margin-top: 100px;
+ }
+
+ #_podtop_,
+ #_podtop_ *[id] {
+ scroll-margin-top: calc(100px + 1rem);
+ }
+}
diff --git a/htdocs/js/PODViewer/podviewer.js b/htdocs/js/PODViewer/podviewer.js
new file mode 100644
index 0000000000..795093205a
--- /dev/null
+++ b/htdocs/js/PODViewer/podviewer.js
@@ -0,0 +1,8 @@
+(() => {
+ const offcanvas = bootstrap.Offcanvas.getOrCreateInstance(document.getElementById('sidebar'));
+ for (const link of document.querySelectorAll('#sidebar .nav-link')) {
+ // The timeout is to work around an issue in Chrome. If the offcanvas hides before the window scrolls to the
+ // fragment in the page, scrolling stops before it gets there.
+ link.addEventListener('click', () => setTimeout(() => offcanvas.hide(), 500));
+ }
+})();
diff --git a/htdocs/js/SampleProblemViewer/documentation-search.js b/htdocs/js/SampleProblemViewer/documentation-search.js
new file mode 100644
index 0000000000..e43c878ef7
--- /dev/null
+++ b/htdocs/js/SampleProblemViewer/documentation-search.js
@@ -0,0 +1,78 @@
+(async () => {
+ const searchBox = document.getElementById('search-box');
+ const resultList = document.getElementById('result-list');
+ if (!resultList || !searchBox) return;
+
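+ // The embedding page may define window.pgDocConfig to override the defaults below.
+ // For example (the values shown are only illustrative):
+ //     window.pgDocConfig = { rootURL: '/pg-docs', searchDataURL: '/pg-docs/search-data.json' };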
+ const rootURL = window.pgDocConfig?.rootURL ?? '.';
+ const htmlSuffixMutation = window.pgDocConfig?.htmlSuffixMutation ?? [/\.p[gl]$/, '.html'];
+ const searchDataURL = window.pgDocConfig?.searchDataURL ?? 'sample-problem-search-data.json';
+
+ let searchData;
+ try {
+ const result = await fetch(searchDataURL);
+ searchData = await result.json();
+ } catch (e) {
+ console.log(e);
+ return;
+ }
+
+ const miniSearch = new MiniSearch({
+ fields: ['filename', 'name', 'description', 'terms', 'macros', 'subjects'],
+ storeFields: ['type', 'filename', 'dir', 'description']
+ });
+ miniSearch.addAll(searchData);
+
+ const searchMacrosCheck = document.getElementById('search-macros');
+ const searchSampleProblemsCheck = document.getElementById('search-sample-problems');
+
+ document.getElementById('clear-search-button')?.addEventListener('click', () => {
+ searchBox.value = '';
+ while (resultList.firstChild) resultList.firstChild.remove();
+ });
+
+ const searchDocumentation = () => {
+ const searchMacros = searchMacrosCheck?.checked;
+ const searchSampleProblems = searchSampleProblemsCheck?.checked;
+
+ while (resultList.firstChild) resultList.firstChild.remove();
+
+ if (!searchBox.value) return;
+
+ for (const result of miniSearch.search(searchBox.value, { prefix: true })) {
+ if (
+ (searchSampleProblems && result.type === 'sample problem') ||
+ (searchMacros && result.type === 'macro')
+ ) {
+ const link = document.createElement('a');
+ link.classList.add('list-group-item', 'list-group-item-action');
+ link.href = `${rootURL}/${
+ result.type === 'sample problem' ? 'sampleproblems' : result.type === 'macro' ? 'pod' : ''
+ }/${result.dir}/${result.filename.replace(...htmlSuffixMutation)}`;
+
+ const linkText = document.createElement('span');
+ linkText.classList.add('h4');
+ linkText.textContent = `${result.filename} (${result.type})`;
+ link.append(linkText);
+
+ if (result.description) {
+ const summary = document.createElement('div');
+ summary.textContent = result.description;
+ link.append(summary);
+ }
+
+ resultList.append(link);
+ }
+ }
+
+ if (resultList.children.length == 0) {
+ const item = document.createElement('div');
+ item.classList.add('alert', 'alert-info');
+ item.innerHTML = 'No results found';
+ resultList.append(item);
+ }
+ };
+
+ searchBox.addEventListener('keyup', searchDocumentation);
+ searchMacrosCheck?.addEventListener('change', searchDocumentation);
+ searchSampleProblemsCheck?.addEventListener('change', searchDocumentation);
+})();
diff --git a/lib/AnswerHash.pm b/lib/AnswerHash.pm
old mode 100755
new mode 100644
diff --git a/lib/PGcore.pm b/lib/PGcore.pm
old mode 100755
new mode 100644
diff --git a/lib/SampleProblemParser.pm b/lib/SampleProblemParser.pm
deleted file mode 100644
index 30b5f8398b..0000000000
--- a/lib/SampleProblemParser.pm
+++ /dev/null
@@ -1,253 +0,0 @@
-package SampleProblemParser;
-use parent qw(Exporter);
-
-use strict;
-use warnings;
-use experimental 'signatures';
-use feature 'say';
-
-use File::Basename qw(dirname basename);
-use File::Find qw(find);
-use Pandoc;
-
-our @EXPORT_OK = qw(parseSampleProblem generateMetadata getSampleProblemCode);
-
-=head1 NAME
-
-SampleProblemParser - Parse the documentation in a sample problem in the /doc
-directory.
-
-=head2 C<parseSampleProblem>
-
-Parse a PG file with extra documentation comments. The input is the file and a
-hash of global variables:
-
-=over
-
-=item C<metadata>: A reference to a hash which has information (name, directory,
-types, subjects, categories) of every sample problem file.
-
-=item C<macro_locations>: A reference to a hash of macros to include as links
-within a problem.
-
-=item C<pod_root>: The root directory of the POD.
-
-=item C<pg_doc_home>: The url of the pg_doc home.
-
-=item C<url_extension>: The html url extension (including the dot) to use for pg
-doc links. The default is the empty string.
-
-=back
-
-=cut
-
-sub parseSampleProblem ($file, %global) {
- my $filename = basename($file);
- open(my $FH, '<:encoding(UTF-8)', $file) or do {
- warn qq{Could not open file "$file": $!};
- return {};
- };
- my @file_contents = <$FH>;
- close $FH;
-
- my (@blocks, @doc_rows, @code_rows, @description);
- my (%options, $descr, $type, $name);
-
- $global{url_extension} //= '';
-
- while (my $row = shift @file_contents) {
- chomp($row);
- $row =~ s/\t/ /g;
- if ($row =~ /^#:%\s*(categor(y|ies)|types?|subjects?|see_also|name)\s*=\s*(.*)\s*$/) {
- # skip this, already parsed.
- } elsif ($row =~ /^#:%\s*(.*)?/) {
- # The row has the form #:% section = NAME.
- # This should parse the previous named section and then reset @doc_rows and @code_rows.
- push(
- @blocks,
- {
- %options,
- doc => pandoc->convert(markdown => 'html', join("\n", @doc_rows)),
- code => join("\n", @code_rows)
- }
- ) if %options;
- %options = split(/\s*:\s*|\s*,\s*|\s*=\s*|\s+/, $1);
- @doc_rows = ();
- @code_rows = ();
- } elsif ($row =~ /^#:/) {
- # This section is documentation to be parsed.
- $row = $row =~ s/^#:\s?//r;
-
- # Parse any LINK/PODLINK/PROBLINK commands in the documentation.
- if ($row =~ /(POD|PROB)?LINK\('(.*?)'\s*(,\s*'(.*)')?\)/) {
- my $link_text = defined($1) ? $1 eq 'POD' ? $2 : $global{metadata}{$2}{name} : $2;
- my $url =
- defined($1)
- ? $1 eq 'POD'
- ? "$global{pod_root}/" . $global{macro_locations}{ $4 // $2 }
- : "$global{pg_doc_home}/$global{metadata}{$2}{dir}/" . ($2 =~ s/.pg$/$global{url_extension}/r)
- : $4;
- $row = $row =~ s/(POD|PROB)?LINK\('(.*?)'\s*(,\s*'(.*)')?\)/[$link_text]($url)/gr;
- }
-
- push(@doc_rows, $row);
- } elsif ($row =~ /^##\s*(END)?DESCRIPTION\s*$/) {
- $descr = $1 ? 0 : 1;
- } elsif ($row =~ /^##/ && $descr) {
- push(@description, $row =~ s/^##\s*//r);
- push(@code_rows, $row);
- } else {
- push(@code_rows, $row);
- }
- }
-
- # The last @doc_rows must be parsed then added to the @blocks.
- push(
- @blocks,
- {
- %options,
- doc => pandoc->convert(markdown => 'html', join("\n", @doc_rows)),
- code => join("\n", @code_rows)
- }
- );
-
- return {
- name => $global{metadata}{$filename}{name},
- blocks => \@blocks,
- code => join("\n", map { $_->{code} } @blocks),
- description => join("\n", @description)
- };
-}
-
-=head2 C<generateMetadata>
-
-Build a hash of metadata for all PG files in the given directory. A reference
-to the hash that is built is returned.
-
-=cut
-
-sub generateMetadata ($problem_dir, %options) {
- my $index_table = {};
-
- find(
- {
- wanted => sub {
- say "Reading file: $File::Find::name" if $options{verbose};
-
- if ($File::Find::name =~ /\.pg$/) {
- my $metadata = parseMetadata($File::Find::name, $problem_dir);
- unless (@{ $metadata->{types} }) {
- warn "The type of sample problem is missing for $File::Find::name.";
- return;
- }
- unless ($metadata->{name}) {
- warn "The name attribute is missing for $File::Find::name.";
- return;
- }
- $index_table->{ basename($File::Find::name) } = $metadata;
- }
- }
- },
- $problem_dir
- );
-
- return $index_table;
-}
-
-my @macros_to_skip = qw(
- PGML.pl
- PGcourse.pl
- PGstandard.pl
-);
-
-sub parseMetadata ($path, $problem_dir) {
- open(my $FH, '<:encoding(UTF-8)', $path) or do {
- warn qq{Could not open file "$path": $!};
- return {};
- };
- my @file_contents = <$FH>;
- close $FH;
-
- my @problem_types = qw(sample technique snippet);
-
- my $metadata = { dir => (dirname($path) =~ s/$problem_dir\/?//r) =~ s/\/*$//r };
-
- while (my $row = shift @file_contents) {
- if ($row =~ /^#:%\s*(categor(y|ies)|types?|subjects?|see_also|name)\s*=\s*(.*)\s*$/) {
- # The row has the form #:% categories = [cat1, cat2, ...].
- my $label = lc($1);
- my @opts = $3 =~ /\[(.*)\]/ ? map { $_ =~ s/^\s*|\s*$//r } split(/,/, $1) : ($3);
- if ($label =~ /types?/) {
- for my $opt (@opts) {
- warn "The type of problem must be one of @problem_types"
- unless grep { lc($opt) eq $_ } @problem_types;
- }
- $metadata->{types} = [ map { lc($_) } @opts ];
- } elsif ($label =~ /^categor/) {
- $metadata->{categories} = \@opts;
- } elsif ($label =~ /^subject/) {
- $metadata->{subjects} = [ map { lc($_) } @opts ];
- } elsif ($label eq 'name') {
- $metadata->{name} = $opts[0];
- } elsif ($label eq 'see_also') {
- $metadata->{related} = \@opts;
- }
- } elsif ($row =~ /loadMacros\(/) {
- chomp($row);
- # Parse the macros, which may be on multiple rows.
- my $macros = $row;
- while ($row && $row !~ /\);\s*$/) {
- $row = shift @file_contents;
- chomp($row);
- $macros .= $row;
- }
- # Split by commas and pull out the quotes.
- my @macros = map {s/['"\s]//gr} split(/\s*,\s*/, $macros =~ s/loadMacros\((.*)\)\;$/$1/r);
- $metadata->{macros} = [];
- for my $macro (@macros) {
- push(@{ $metadata->{macros} }, $macro) unless grep { $_ eq $macro } @macros_to_skip;
- }
- }
- }
-
- return $metadata;
-}
-
-=head2 C<getSampleProblemCode>
-
-Parse a PG file with extra documentation comments and strip that all out
-returning the clean problem code. This returns the same code that the
-C<parseSampleProblem> returns, except at much less expense as it does not parse
-the documentation, it does not require that the metadata be parsed first, and it
-does not need macro POD information.
-
-=cut
-
-sub getSampleProblemCode ($file) {
- my $filename = basename($file);
- open(my $FH, '<:encoding(UTF-8)', $file) or do {
- warn qq{Could not open file "$file": $!};
- return '';
- };
- my @file_contents = <$FH>;
- close $FH;
-
- my (@code_rows, $inCode);
-
- while (my $row = shift @file_contents) {
- chomp($row);
- $row =~ s/\t/ /g;
- if ($row =~ /^#:(.*)?/) {
- # This is documentation so skip it.
- } elsif ($row =~ /^\s*(END)?DOCUMENT.*$/) {
- $inCode = $1 ? 0 : 1;
- push(@code_rows, $row);
- } elsif ($inCode) {
- push(@code_rows, $row);
- }
- }
-
- return join("\n", @code_rows);
-}
-
-1;
diff --git a/lib/WeBWorK/PG/SampleProblemParser.pm b/lib/WeBWorK/PG/SampleProblemParser.pm
new file mode 100644
index 0000000000..42fd73ffc3
--- /dev/null
+++ b/lib/WeBWorK/PG/SampleProblemParser.pm
@@ -0,0 +1,474 @@
+package WeBWorK::PG::SampleProblemParser;
+use parent qw(Exporter);
+
+use strict;
+use warnings;
+use experimental 'signatures';
+use feature 'say';
+
+my $pgRoot;
+
+use Mojo::File qw(curfile);
+BEGIN { $pgRoot = curfile->dirname->dirname->dirname->dirname; }
+
+use File::Basename qw(dirname basename);
+use File::Find qw(find);
+use Mojo::File qw(path);
+use Mojo::JSON qw(decode_json encode_json);
+use Pandoc;
+use Pod::Simple::Search;
+use Pod::Simple::SimpleTree;
+
+our @EXPORT_OK = qw(parseSampleProblem generateMetadata getSampleProblemCode getSearchData);
+
+=head1 NAME
+
+WeBWorK::PG::SampleProblemParser - Parse sample problems and extract metadata,
+documentation, and code.
+
+=head2 parseSampleProblem
+
+Parse a PG file with extra documentation comments. The input is the file and a
+hash of global variables:
+
+=over
+
+=item *
+
+C<metadata>: A reference to a hash which has information (name, directory,
+types, subjects, categories) of every sample problem file.
+
+=item *
+
+C<macro_locations>: A reference to a hash of macros to include as links within a
+problem.
+
+=item *
+
+C<pod_base_url>: The base URL for the POD HTML files.
+
+=item *
+
+C<sample_problem_base_url>: The base URL for the sample problem HTML files.
+
+=item *
+
+C<url_extension>: The html url extension (including the dot) to use for pg doc
+links. The default is the empty string.
+
+=back
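+
+A minimal usage sketch (the file path and option values below are assumptions,
+not defaults):
+
+    my $parsed = parseSampleProblem(
+        "$problem_dir/Algebra/BasicAlgebra.pg",
+        metadata                => $metadata,
+        macro_locations         => $macro_locations,
+        pod_base_url            => '/pod/macros',
+        sample_problem_base_url => '/sampleproblems',
+        url_extension           => '.html'
+    );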
+
+=cut
+
+sub parseSampleProblem ($file, %global) {
+ my $filename = basename($file);
+ open(my $FH, '<:encoding(UTF-8)', $file) or do {
+ warn qq{Could not open file "$file": $!};
+ return {};
+ };
+ my @file_contents = <$FH>;
+ close $FH;
+
+ my (@blocks, @doc_rows, @code_rows, @description);
+ my (%options, $descr, $type, $name);
+
+ $global{url_extension} //= '';
+
+ while (my $row = shift @file_contents) {
+ chomp($row);
+ $row =~ s/\t/ /g;
+ if ($row =~ /^#:%\s*(categor(y|ies)|types?|subjects?|see_also|name)\s*=\s*(.*)\s*$/) {
+ # skip this, already parsed.
+ } elsif ($row =~ /^#:%\s*(.*)?/) {
+ # The row has the form #:% section = NAME.
+ # This should parse the previous named section and then reset @doc_rows and @code_rows.
+ push(
+ @blocks,
+ {
+ %options,
+ doc => pandoc->convert(markdown => 'html', join("\n", @doc_rows)),
+ code => join("\n", @code_rows)
+ }
+ ) if %options;
+ %options = split(/\s*:\s*|\s*,\s*|\s*=\s*|\s+/, $1);
+ @doc_rows = ();
+ @code_rows = ();
+ } elsif ($row =~ /^#:/) {
+ # This section is documentation to be parsed.
+ $row = $row =~ s/^#:\s?//r;
+
+ # Parse any LINK/PODLINK/PROBLINK commands in the documentation.
+ if ($row =~ /(POD|PROB)?LINK\('(.*?)'\s*(,\s*'(.*)')?\)/) {
+ my $link_text = defined($1) ? $1 eq 'POD' ? $2 : $global{metadata}{$2}{name} : $2;
+ my $url =
+ defined($1)
+ ? $1 eq 'POD'
+ ? "$global{pod_base_url}/" . $global{macro_locations}{ $4 // $2 }
+ : "$global{sample_problem_base_url}/$global{metadata}{$2}{dir}/"
+ . ($2 =~ s/.pg$/$global{url_extension}/r)
+ : $4;
+ $row = $row =~ s/(POD|PROB)?LINK\('(.*?)'\s*(,\s*'(.*)')?\)/[$link_text]($url)/gr;
+ }
+
+ push(@doc_rows, $row);
+ } elsif ($row =~ /^##\s*(END)?DESCRIPTION\s*$/) {
+ $descr = $1 ? 0 : 1;
+ } elsif ($row =~ /^##/ && $descr) {
+ push(@description, $row =~ s/^##\s*//r);
+ push(@code_rows, $row);
+ } else {
+ push(@code_rows, $row);
+ }
+ }
+
+ # The last @doc_rows must be parsed then added to the @blocks.
+ push(
+ @blocks,
+ {
+ %options,
+ doc => pandoc->convert(markdown => 'html', join("\n", @doc_rows)),
+ code => join("\n", @code_rows)
+ }
+ );
+
+ return {
+ name => $global{metadata}{$filename}{name},
+ blocks => \@blocks,
+ code => join("\n", map { $_->{code} } @blocks),
+ description => join("\n", @description)
+ };
+}
+
+=head2 generateMetadata
+
+Build a hash of metadata for all PG files in the given directory. A reference
+to the hash that is built is returned.
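+
+For example (the directory is only illustrative):
+
+    my $metadata = generateMetadata("$pg_root/tutorial/sample-problems", verbose => 1);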
+
+=cut
+
+sub generateMetadata ($problem_dir, %options) {
+ my $index_table = {};
+
+ find(
+ {
+ wanted => sub {
+ say "Reading file: $File::Find::name" if $options{verbose};
+
+ if ($File::Find::name =~ /\.pg$/) {
+ my $metadata = parseMetadata($File::Find::name, $problem_dir);
+ unless (@{ $metadata->{types} }) {
+ warn "The type of sample problem is missing for $File::Find::name.";
+ return;
+ }
+ unless ($metadata->{name}) {
+ warn "The name attribute is missing for $File::Find::name.";
+ return;
+ }
+ $index_table->{ basename($File::Find::name) } = $metadata;
+ }
+ }
+ },
+ $problem_dir
+ );
+
+ return $index_table;
+}
+
+my @macros_to_skip = qw(
+ PGML.pl
+ PGcourse.pl
+ PGstandard.pl
+);
+
+sub parseMetadata ($path, $problem_dir) {
+ open(my $FH, '<:encoding(UTF-8)', $path) or do {
+ warn qq{Could not open file "$path": $!};
+ return {};
+ };
+ my @file_contents = <$FH>;
+ close $FH;
+
+ my @problem_types = qw(sample technique snippet);
+
+ my $metadata = { dir => (dirname($path) =~ s/$problem_dir\/?//r) =~ s/\/*$//r };
+
+ while (my $row = shift @file_contents) {
+ if ($row =~ /^#:%\s*(categor(y|ies)|types?|subjects?|see_also|name)\s*=\s*(.*)\s*$/) {
+ # The row has the form #:% categories = [cat1, cat2, ...].
+ my $label = lc($1);
+ my @opts = $3 =~ /\[(.*)\]/ ? map { $_ =~ s/^\s*|\s*$//r } split(/,/, $1) : ($3);
+ if ($label =~ /types?/) {
+ for my $opt (@opts) {
+ warn "The type of problem must be one of @problem_types"
+ unless grep { lc($opt) eq $_ } @problem_types;
+ }
+ $metadata->{types} = [ map { lc($_) } @opts ];
+ } elsif ($label =~ /^categor/) {
+ $metadata->{categories} = \@opts;
+ } elsif ($label =~ /^subject/) {
+ $metadata->{subjects} = [ map { lc($_) } @opts ];
+ } elsif ($label eq 'name') {
+ $metadata->{name} = $opts[0];
+ } elsif ($label eq 'see_also') {
+ $metadata->{related} = \@opts;
+ }
+ } elsif ($row =~ /loadMacros\(/) {
+ chomp($row);
+ # Parse the macros, which may be on multiple rows.
+ my $macros = $row;
+ while ($row && $row !~ /\);\s*$/) {
+ $row = shift @file_contents;
+ chomp($row);
+ $macros .= $row;
+ }
+ # Split on commas and remove quotes and whitespace.
+ my @macros = map {s/['"\s]//gr} split(/\s*,\s*/, $macros =~ s/loadMacros\((.*)\)\;$/$1/r);
+ $metadata->{macros} = [];
+ for my $macro (@macros) {
+ push(@{ $metadata->{macros} }, $macro) unless grep { $_ eq $macro } @macros_to_skip;
+ }
+ }
+ }
+
+ return $metadata;
+}
+
+=head2 getSampleProblemCode
+
+Parse a PG file with extra documentation comments and strip all of that out,
+returning the clean problem code. This returns the same code that
+C<parseSampleProblem> returns, but at much less expense: it does not parse the
+documentation, does not require that the metadata be parsed first, and does not
+need macro POD information.
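+
+For example (the path is only illustrative):
+
+    my $code = getSampleProblemCode("$problem_dir/Algebra/BasicAlgebra.pg");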
+
+=cut
+
+sub getSampleProblemCode ($file) {
+ my $filename = basename($file);
+ open(my $FH, '<:encoding(UTF-8)', $file) or do {
+ warn qq{Could not open file "$file": $!};
+ return '';
+ };
+ my @file_contents = <$FH>;
+ close $FH;
+
+ my (@code_rows, $inCode);
+
+ while (my $row = shift @file_contents) {
+ chomp($row);
+ $row =~ s/\t/ /g;
+ if ($row =~ /^#:(.*)?/) {
+ # This is documentation so skip it.
+ } elsif ($row =~ /^\s*(END)?DOCUMENT.*$/) {
+ $inCode = $1 ? 0 : 1;
+ push(@code_rows, $row);
+ } elsif ($inCode) {
+ push(@code_rows, $row);
+ }
+ }
+
+ return join("\n", @code_rows);
+}
+
+=head2 getSearchData
+
+Generate search data for sample problem files and macro POD. The single
+required argument is the name of the file to write the search data to. If the
+file does not exist, a new file containing the generated search data is
+written. If the file exists and contains search data from a previous run of
+this method, the data is updated based on the modification times of the sample
+problem files and macros. In any case, an array reference containing the
+generated search data is returned.
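+
+For example (the file name is only illustrative):
+
+    my $searchData = getSearchData('/tmp/sample-problem-search-data.json');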
+
+=cut
+
+my $stopWordsCache;
+
+sub getSearchData ($searchDataFileName) {
+ my $searchDataFile = path($searchDataFileName);
+ my %files = map { $_->{filename} => $_ } @{ (eval { decode_json($searchDataFile->slurp('UTF-8')) } // []) };
+ my @updatedFiles;
+
+ my $stopWords = sub ($word) {
+ return $stopWordsCache->{$word} if $stopWordsCache;
+ $stopWordsCache = {};
+
+ my $contents = eval { path("$pgRoot/assets/stop-words-en.txt")->slurp('UTF-8') };
+ return $stopWordsCache if $@;
+
+ for my $line (split("\n", $contents)) {
+ chomp $line;
+ next if $line =~ /^#/ || !$line;
+ $stopWordsCache->{$line} = 1;
+ }
+
+ return $stopWordsCache->{$word};
+ };
+
+ my $processLine = sub ($line) {
+ my %words;
+
+ # Extract linked macros and problems.
+ my @linkedFiles = $line =~ /(?:PODLINK|PROBLINK)\('([\w.]+)'\)/g;
+ $words{$_} = 1 for @linkedFiles;
+
+ # Replace any non-word characters with spaces.
+ $line =~ s/\W/ /g;
+
+ for my $word (split(/\s+/, $line)) {
+ next if $word =~ /^\d*$/;
+ $word = lc($word);
+ $words{$word} = 1 if !$stopWords->($word);
+ }
+ return keys %words;
+ };
+
+ # Extract the text for a section from the given POD with a section header title.
+ my $extractHeadText = sub ($root, $title) {
+ my @index = grep { ref($root->[$_]) eq 'ARRAY' && $root->[$_][2] eq $title } 0 .. $#$root;
+ return unless @index == 1;
+
+ my $node = $root->[ $index[0] + 1 ];
+ my $str = '';
+ for (2 .. $#$node) {
+ $str .= ref($node->[$_]) eq 'ARRAY' ? $node->[$_][2] : $node->[$_];
+ }
+ return $str;
+ };
+
+ # Extract terms from POD headers.
+ my $extractHeaders = sub ($root) {
+ my %terms =
+ map { $_ => 1 }
+ grep { $_ && !$stopWords->($_) }
+ map { split(/\s+/, $_) }
+ map { lc($_) =~ s/\W/ /gr }
+ map {
+ grep { !ref($_) }
+ @$_[ 2 .. $#$_ ]
+ } grep { ref($_) eq 'ARRAY' && $_->[0] =~ /^head\d+$/ } @$root;
+ return [ keys %terms ];
+ };
+
+ # Process the sample problems in the sample problem directory.
+ find(
+ {
+ wanted => sub {
+ return unless $_ =~ /\.pg$/;
+
+ my $file = path($File::Find::name);
+ my $lastModified = $file->stat->mtime;
+
+ if ($files{$_}) {
+ push(@updatedFiles, $files{$_});
+ return if $files{$_}{lastModified} >= $lastModified;
+ }
+
+ my @fileContents = eval { split("\n", $file->slurp('UTF-8')) };
+ return if $@;
+
+ if (!$files{$_}) {
+ $files{$_} = {
+ type => 'sample problem',
+ filename => $_,
+ dir => $file->dirname->basename
+ };
+ push(@updatedFiles, $files{$_});
+ }
+ $files{$_}{lastModified} = $lastModified;
+
+ my (%words, @kw, @macros, @subjects, $description);
+
+ while (@fileContents) {
+ my $line = shift @fileContents;
+ if ($line =~ /^#:%\s*(\w+)\s*=\s*(.*)\s*$/) {
+ # Store the name and subjects.
+ $files{$_}{name} = $2 if $1 eq 'name';
+ if ($1 eq 'subject') {
+ @subjects = split(',\s*', $2 =~ s/\[(.*)\]/$1/r);
+ }
+ } elsif ($line =~ /^#:\s*(.*)?/) {
+ my @newWords = $processLine->($1);
+ @words{@newWords} = (1) x @newWords if @newWords;
+ } elsif ($line =~ /loadMacros\(/) {
+ my $macros = $line;
+ while ($line && $line !~ /\);\s*$/) {
+ $line = shift @fileContents;
+ $macros .= $line;
+ }
+ my @usedMacros =
+ map {s/['"\s]//gr} split(/\s*,\s*/, $macros =~ s/loadMacros\((.*)\)\;$/$1/r);
+
+ # Get the macros other than PGML.pl, PGstandard.pl, and PGcourse.pl.
+ for my $m (@usedMacros) {
+ push(@macros, $m) unless $m =~ /^(PGML|PGstandard|PGcourse)\.pl$/;
+ }
+ } elsif ($line =~ /##\s*KEYWORDS\((.*)\)/) {
+ @kw = map {s/^'(.*)'$/$1/r} split(/,\s*/, $1);
+ } elsif ($line =~ /^##\s*DESCRIPTION/) {
+ $line = shift(@fileContents);
+ while ($line && $line !~ /^##\s*ENDDESCRIPTION/) {
+ $description .= ($line =~ s/^##\s+//r) . ' ';
+ $line = shift(@fileContents);
+ }
+ $description =~ s/\s+$//;
+ }
+ }
+
+ $files{$_}{description} = $description;
+ $files{$_}{subjects} = \@subjects;
+ $files{$_}{terms} = [ keys %words ];
+ $files{$_}{keywords} = \@kw;
+ $files{$_}{macros} = \@macros;
+
+ return;
+ }
+ },
+ "$pgRoot/tutorial/sample-problems"
+ );
+
+ # Process the POD in macros in the macros directory.
+ (undef, my $macroFiles) = Pod::Simple::Search->new->inc(0)->survey("$pgRoot/macros");
+ for my $macroFile (sort keys %$macroFiles) {
+ next if $macroFile =~ /deprecated/;
+
+ my $file = path($macroFile);
+ my $fileName = $file->basename;
+ my $lastModified = $file->stat->mtime;
+
+ if ($files{$fileName}) {
+ push(@updatedFiles, $files{$fileName});
+ next if $files{$fileName}{lastModified} >= $lastModified;
+ }
+
+ if (!$files{$fileName}) {
+ $files{$fileName} = {
+ type => 'macro',
+ id => scalar(keys %files) + 1,
+ filename => $fileName,
+ dir => $file->dirname->to_rel($pgRoot)->to_string
+ };
+ push(@updatedFiles, $files{$fileName});
+ }
+ $files{$fileName}{lastModified} = $lastModified;
+
+ my $root = Pod::Simple::SimpleTree->new->parse_file($file->to_string)->root;
+
+ $files{$fileName}{terms} = $extractHeaders->($root);
+
+ if (my $nameDescription = $extractHeadText->($root, 'NAME')) {
+ (undef, my $description) = split(/\s*-\s*/, $nameDescription, 2);
+ $files{$fileName}{description} = $description if $description;
+ }
+ }
+
+ # Re-index in case files were added or removed.
+ my $count = 0;
+ $_->{id} = ++$count for @updatedFiles;
+
+ $searchDataFile->spew(encode_json(\@updatedFiles), 'UTF-8');
+
+ return \@updatedFiles;
+}
+
+1;
diff --git a/lib/WeBWorK/Utils/PODParser.pm b/lib/WeBWorK/Utils/PODParser.pm
new file mode 100644
index 0000000000..57950d46b0
--- /dev/null
+++ b/lib/WeBWorK/Utils/PODParser.pm
@@ -0,0 +1,66 @@
+package WeBWorK::Utils::PODParser;
+use parent qw(Pod::Simple::XHTML);
+
+use strict;
+use warnings;
+
+use Pod::Simple::XHTML;
+use File::Basename qw(basename);
+
+# $podFiles must be provided in order for POD links to local files to work. It should be the
+# first return value of the Pod::Simple::Search survey method.
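+#
+# A minimal construction sketch (the source directory is only illustrative):
+#
+#     my ($name2path) = Pod::Simple::Search->new->inc(0)->survey('/opt/webwork/pg');
+#     my $parser = WeBWorK::Utils::PODParser->new($name2path);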
+sub new {
+ my ($invocant, $podFiles) = @_;
+ my $class = ref $invocant || $invocant;
+ my $self = $class->SUPER::new(@_);
+ $self->perldoc_url_prefix('https://metacpan.org/pod/');
+ $self->index(1);
+ $self->backlink(1);
+ $self->html_charset('UTF-8');
+ $self->{podFiles} = $podFiles // {};
+ return bless $self, $class;
+}
+
+# Attempt to resolve links to local files. If a local file is not found, then
+# let Pod::Simple::XHTML resolve to a cpan link.
+sub resolve_pod_page_link {
+ my ($self, $target, $section) = @_;
+
+ unless (defined $target) {
+ print "Using internal page link.\n" if $self->{verbose} > 2;
+ return $self->SUPER::resolve_pod_page_link($target, $section);
+ }
+
+ my $podFound;
+ for (keys %{ $self->{podFiles} }) {
+ if ($target eq $_ =~ s/lib:://r || $target eq basename($self->{podFiles}{$_}) =~ s/\.pod$//r) {
+ $podFound =
+ $self->{assert_html_ext} ? ($self->{podFiles}{$_} =~ s/\.(pm|pl|pod)$/.html/r) : $self->{podFiles}{$_};
+ last;
+ }
+ }
+
+ if ($podFound) {
+ my $pod_url = $self->encode_entities($podFound =~ s/^$self->{source_root}/$self->{base_url}/r)
+ . ($section ? '#' . $self->idify($self->encode_entities($section), 1) : '');
+ print "Resolved local pod link for $target" . ($section ? "/$section" : '') . " to $pod_url\n"
+ if $self->{verbose} > 2;
+ return $pod_url;
+ }
+
+ print "Using cpan pod link for $target" . ($section ? "/$section" : '') . "\n" if $self->{verbose} > 2;
+ return $self->SUPER::resolve_pod_page_link($target, $section);
+}
+
+# Trim spaces from the beginning of each line in code blocks. Each line in the
+# block is trimmed by up to the number of leading spaces on the first
+# line. Note that Pod::Simple::XHTML has
+# already converted tab characters into 8 spaces.
+sub handle_code {
+ my ($self, $code) = @_;
+ my $start_spaces = length(($code =~ /^( *)/)[0]) || '';
+ $self->SUPER::handle_code($code =~ s/^( {1,$start_spaces})//gmr);
+ return;
+}
+
+1;
diff --git a/lib/WeBWorK/Utils/PODtoHTML.pm b/lib/WeBWorK/Utils/PODtoHTML.pm
new file mode 100644
index 0000000000..19c30e7f50
--- /dev/null
+++ b/lib/WeBWorK/Utils/PODtoHTML.pm
@@ -0,0 +1,212 @@
+package WeBWorK::Utils::PODtoHTML;
+
+use strict;
+use warnings;
+use utf8;
+
+use Pod::Simple::Search;
+use Mojo::Template;
+use Mojo::DOM;
+use Mojo::Collection qw(c);
+use File::Path qw(make_path);
+use File::Basename qw(dirname);
+use IO::File;
+use POSIX qw(strftime);
+
+use WeBWorK::Utils::PODParser;
+
+our @sections = (
+ doc => 'Documentation',
+ bin => 'Scripts',
+ macros => 'Macros',
+ lib => 'Libraries',
+);
+our %macro_names = (
+ answers => 'Answers',
+ contexts => 'Contexts',
+ core => 'Core',
+ deprecated => 'Deprecated',
+ graph => 'Graph',
+ math => 'Math',
+ misc => 'Miscellaneous',
+ parsers => 'Parsers',
+ ui => 'User Interface'
+);
+
+sub new {
+ my ($invocant, %o) = @_;
+ my $class = ref $invocant || $invocant;
+
+ my @section_list = ref($o{sections}) eq 'ARRAY' ? @{ $o{sections} } : @sections;
+ my $section_hash = {@section_list};
+ my $section_order = [ map { $section_list[ 2 * $_ ] } 0 .. $#section_list / 2 ];
+ delete $o{sections};
+
+ my $self = {
+ %o,
+ idx => {},
+ section_hash => $section_hash,
+ section_order => $section_order,
+ macros_hash => {},
+ };
+ return bless $self, $class;
+}
+
+sub convert_pods {
+ my $self = shift;
+ my $source_root = $self->{source_root};
+ my $dest_root = $self->{dest_root};
+
+ my $regex = join('|', map {"^$_"} @{ $self->{section_order} });
+
+ my ($name2path, $path2name) = Pod::Simple::Search->new->inc(0)->limit_re(qr!$regex!)->survey($self->{source_root});
+ for (keys %$path2name) {
+ print "Processing file: $_\n" if $self->{verbose} > 1;
+ $self->process_pod($_, $name2path);
+ }
+
+ $self->write_index("$dest_root/index.html");
+
+ return;
+}
+
+sub process_pod {
+ my ($self, $pod_path, $pod_files) = @_;
+
+ my $pod_name;
+
+ my ($subdir, $filename) = $pod_path =~ m|^$self->{source_root}/(?:(.*)/)?(.*)$|;
+
+ my ($subdir_first, $subdir_rest) = ('', '');
+
+ if (defined $subdir) {
+ if ($subdir =~ m|/|) {
+ ($subdir_first, $subdir_rest) = $subdir =~ m|^([^/]*)/(.*)|;
+ } else {
+ $subdir_first = $subdir;
+ }
+ }
+
+ $pod_name = (defined $subdir_rest ? "$subdir_rest/" : '') . $filename;
+ if ($filename =~ /\.pl$/) {
+ $filename =~ s/\.pl$/.html/;
+ } elsif ($filename =~ /\.pod$/) {
+ $pod_name =~ s/\.pod$//;
+ $filename =~ s/\.pod$/.html/;
+ } elsif ($filename =~ /\.pm$/) {
+ $pod_name =~ s/\.pm$//;
+ $pod_name =~ s|/+|::|g;
+ $filename =~ s/\.pm$/.html/;
+ } elsif ($filename !~ /\.html$/) {
+ $filename .= '.html';
+ }
+
+ $pod_name =~ s/^(\/|::)//;
+
+ my $html_dir = $self->{dest_root} . (defined $subdir ? "/$subdir" : '');
+ my $html_path = "$html_dir/$filename";
+ my $html_rel_path = defined $subdir ? "$subdir/$filename" : $filename;
+
+ $self->update_index($subdir, $html_rel_path, $pod_name);
+ make_path($html_dir);
+ my $html = $self->do_pod2html(
+ pod_path => $pod_path,
+ pod_name => $pod_name,
+ pod_files => $pod_files
+ );
+ my $fh = IO::File->new($html_path, '>:encoding(UTF-8)')
+ or die "Failed to open file '$html_path' for writing: $!\n";
+ print $fh $html;
+
+ return;
+}
+
+sub update_index {
+ my ($self, $subdir, $html_rel_path, $pod_name) = @_;
+
+ $subdir =~ s|/.*$||;
+ my $idx = $self->{idx};
+ my $sections = $self->{section_hash};
+ if ($subdir eq 'macros') {
+ $idx->{macros} = [];
+ if ($pod_name =~ m!^(.+)/(.+)$!) {
+ push @{ $self->{macros_hash}{$1} }, [ $html_rel_path, $2 ];
+ } else {
+ push @{ $idx->{doc} }, [ $html_rel_path, $pod_name ];
+ }
+ } elsif (exists $sections->{$subdir}) {
+ push @{ $idx->{$subdir} }, [ $html_rel_path, $pod_name ];
+ } else {
+ warn "no section for subdir '$subdir'\n";
+ }
+
+ return;
+}
+
+sub write_index {
+ my ($self, $out_path) = @_;
+
+ my $fh = IO::File->new($out_path, '>:encoding(UTF-8)') or die "Failed to open index '$out_path' for writing: $!\n";
+ print $fh Mojo::Template->new(vars => 1)->render_file(
+ "$self->{template_dir}/category-index.mt",
+ {
+ title => 'POD for ' . ($self->{source_root} =~ s|^.*/||r),
+ dest_url => $self->{dest_url},
+ home_url => $self->{home_url},
+ home_url_link_name => $self->{home_url_link_name},
+ pod_index => $self->{idx},
+ sections => $self->{section_hash},
+ section_order => $self->{section_order},
+ macros => $self->{macros_hash},
+ macros_order => [ sort keys %{ $self->{macros_hash} } ],
+ macro_names => \%macro_names,
+ date => strftime('%a %b %e %H:%M:%S %Z %Y', localtime)
+ }
+ );
+
+ return;
+}
+
+sub do_pod2html {
+ my ($self, %o) = @_;
+
+ my $psx = WeBWorK::Utils::PODParser->new($o{pod_files});
+ $psx->{source_root} = $self->{source_root};
+ $psx->{verbose} = $self->{verbose};
+ $psx->{assert_html_ext} = 1;
+ $psx->{base_url} = $self->{page_url} // $self->{dest_url} // '';
+ $psx->output_string(\my $html);
+ $psx->html_header('');
+ $psx->html_footer('');
+ $psx->parse_file($o{pod_path});
+
+ my $dom = Mojo::DOM->new($html);
+ my $podIndexUL = $dom->at('ul[id="index"]');
+ my $podIndex = $podIndexUL ? $podIndexUL->find('ul[id="index"] > li') : c();
+ for (@$podIndex) {
+ $_->attr({ class => 'nav-item' });
+ $_->at('a')->attr({ class => 'nav-link p-0' });
+ for (@{ $_->find('ul') }) {
+ $_->attr({ class => 'nav flex-column w-100' });
+ }
+ for (@{ $_->find('li') }) {
+ $_->attr({ class => 'nav-item' });
+ $_->at('a')->attr({ class => 'nav-link p-0' });
+ }
+ }
+ my $podHTML = $podIndexUL ? $podIndexUL->remove : $html;
+
+ return Mojo::Template->new(vars => 1)->render_file(
+ "$self->{template_dir}/pod.mt",
+ {
+ title => $o{pod_name},
+ dest_url => $self->{dest_url},
+ home_url => $self->{home_url},
+ home_url_link_name => $self->{home_url_link_name},
+ index => $podIndex,
+ content => $podHTML
+ }
+ );
+}
+
+1;
diff --git a/tutorial/sample-problems/README.md b/tutorial/sample-problems/README.md
index 4b8b56ce11..3d2cbd2ed7 100644
--- a/tutorial/sample-problems/README.md
+++ b/tutorial/sample-problems/README.md
@@ -73,16 +73,16 @@ All lines following the documentation lines are considered code until the next `
## Generate the documentation
-The documentation is generated with the `parse-prob-doc.pl` script in the `bin`
+The documentation is generated with the `parse-problem-doc.pl` script in the `bin`
directory of pg. There are the following options (and many are required):
-- `problem_dir` or `d`: The directory where the sample problems are. This defaults to
+- `problem-dir` or `d`: The directory where the sample problems are. This defaults to
`PG_ROOT/tutorial/sample-problems` if not passed in.
-- `out_dir` or `o`: The directory where the resulting documentation files (HTML)
+- `out-dir` or `o`: The directory where the resulting documentation files (HTML)
will be located.
-- `pod_root` or `p`: The URL where the POD is located. This is needed to
+- `pod-base-url` or `p`: The URL where the POD is located. This is needed to
correctly link POD from the sample problems.
-- `pg_doc_home` or `h`: The URL of the directory for `out_dir`. This is needed
+- `sample-problem-base-url` or `s`: The URL at which the `out-dir` directory is served. This is needed
for correct linking.
- `verbose` or `v`: verbose mode.
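+
+For example, a typical invocation (the paths and URLs shown are only illustrative) is:
+
+```sh
+bin/parse-problem-doc.pl --out-dir /var/www/pg-docs/sample-problems \
+    --pod-base-url /pg-docs/pod --sample-problem-base-url /pg-docs/sample-problems -v
+```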
@@ -94,7 +94,7 @@ produce four different ways of categorizing the problems.
- an html file with the documented PG file
- a pg file with the documentation removed. There is a link to this in the html file.
-The script `parse-prob-doc.pl` parses each pg file and uses the `problem-template.mt`
+The script `parse-problem-doc.pl` parses each pg file and uses the `problem-template.mt`
template file to generate the
html. This template is processed using the `Mojo::Template` Perl module. See the
[Mojo::Template documentation](https://docs.mojolicious.org/Mojo/Template) for more information.
diff --git a/tutorial/templates/general-layout.mt b/tutorial/templates/general-layout.mt
index 3f50146abd..b7d9fac40d 100644
--- a/tutorial/templates/general-layout.mt
+++ b/tutorial/templates/general-layout.mt
@@ -5,7 +5,7 @@
PG Sample Problems
-
+
-
+
diff --git a/tutorial/templates/index.html b/tutorial/templates/index.html
new file mode 100644
index 0000000000..aa271047f4
--- /dev/null
+++ b/tutorial/templates/index.html
@@ -0,0 +1,139 @@
+
+
+
+ PG Documentation
+
+
+
+
+
+
+
+
+
+
+
PG Documentation
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ This is the documentation for PG, the problem authoring language for WeBWorK. The links below
+ include sample problems demonstrating problem authoring techniques and POD (Plain Old
+ Documentation), explaining how to use PG macros and related modules.
+