Reorganize parallel programming lectures and improve content flow #429
Conversation
Major restructuring of parallelization-related content across lectures to improve pedagogical flow and consolidate related material. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
The parallel programming lecture reorganization moved the Numba parallelization exercise to numba.md. Update the link accordingly. Related to QuantEcon/lecture-python-programming.myst#429
- Add comprehensive "Random numbers and pure functions" section
- Demonstrate NumPy's impure random number generation vs JAX's pure approach
- Fix spelling errors: discusson → discussion, explict → explicit, parallelizaton → parallelization, hardward → hardware, sleve → sleeve, targetting → targeting
- Fix grammar: "uses use" → "uses", "short that" → "shorter than", "function will" → "functions will", "Prevents" → "Prevent"
- Fix missing `jax.` prefix in random number examples
- Improve clarity and consistency throughout
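The contrast this commit documents — NumPy's stateful generator versus JAX's explicit-key approach — can be sketched as follows (an illustrative example assuming `jax` is installed, not the lecture's actual code):

```python
import numpy as np
import jax

# NumPy's legacy interface is impure: each call mutates hidden global
# state, so two successive calls return different values.
np.random.seed(0)
a = np.random.uniform()
b = np.random.uniform()          # differs from a

# JAX random functions are pure: the state is an explicit key argument,
# and the same key always yields the same draw.
key = jax.random.PRNGKey(0)
x = jax.random.uniform(key)
y = jax.random.uniform(key)      # same key -> same value as x

# To get fresh draws, split the key explicitly.
subkey1, subkey2 = jax.random.split(key)
z = jax.random.uniform(subkey1)

print(a != b, float(x) == float(y))
```

This explicit threading of random state is what makes JAX functions compatible with `jit` and other transformations.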
Major restructuring:

- Move Functional Programming section earlier (after NumPy Replacement)
- Integrate pure functions discussion into Random Numbers section
- Move "Compiling non-pure functions" into JIT section
- Add smooth transitions between sections

This creates a logical progression: basics → philosophy → features. Readers now understand WHY before seeing HOW, making JAX's design choices (like explicit random state) more intuitive.

Also fix syntax errors in timer code blocks (missing colons).
JAX Intro Lecture Improvements

This PR now includes significant improvements to the JAX intro lecture.

Content Additions
Structural Reorganization

Reorganized the lecture for better pedagogical flow. New section order:
Why this improves the lecture:
Quality Improvements
The lecture now provides a clear, logical progression that makes JAX's design choices intuitive rather than mysterious.
This commit includes pedagogical improvements across three lectures:

**numba.md:**
- Improve sentence flow with better transitions
- Change Wikipedia multithreading link to internal reference
- Add "(multithreading)=" label to Multithreaded Loops section
- Remove "Numba will be a key part of our lectures..." sentence
- Add transition phrase "Beyond speed gains from compilation"
- Clarify NumPy arrays "which have well-defined types"
- Change "For example" to "Notably" for better flow
- Add "Conversely" transition for prange vs range comparison

**numpy.md:**
- Add "### Basics" subheading for better organization
- Emphasize "flat" array concept with bold formatting
- Improve shape attribute explanation with inline comments
- Remove np.asarray vs np.array comparison examples
- Remove np.genfromtxt reference, keep only np.loadtxt
- Remove redundant note about zero-based indices
- Improve searchsorted() description formatting
- Remove redundant NumPy function examples (np.sum, np.mean)
- Simplify matrix multiplication section (remove old Python version notes)
- Simplify @ operator examples, remove redundant demonstrations
- Remove manual for-loop equivalent of broadcasting
- Remove higher-dimensional broadcasting code examples
- Remove higher-dimensional ValueError example
- Add "### Mutability" subheading and improve organization
- Change "Vectorized Functions" to "Universal Functions" heading
- Emphasize terminology with bold: **vectorized functions**, **ufuncs**, **universal functions**
- Add note about JAX's np.vectorize
- Remove "Speed Comparisons" section (moved to numpy_vs_numba_vs_jax.md)
- Remove "Implicit Multithreading in NumPy" section (moved to numpy_vs_numba_vs_jax.md)

**numpy_vs_numba_vs_jax.md:**
- Change title from "Parallelization" to "NumPy vs Numba vs JAX"
- Add jax to pip install command
- Add missing imports: random, mpl_toolkits.mplot3d, matplotlib.cm
- Add "### Speed Comparisons" section (moved from numpy.md)
- Add "### Vectorization vs Loops" section (moved from numpy.md)
- Add "### Universal Functions" section (moved from numpy.md)
- Add "### Implicit Multithreading in NumPy" section (moved from numpy.md)
- Change "some examples" to "an example" in multithreading description
Latest Update: Pedagogical Improvements and Content Reorganization

This update includes improvements to sentence flow and content organization across three lectures. All changes have been tested by converting to Python with jupytext and executing successfully.

📝 numba.md - Flow Improvements

Internal Linking:
Sentence Flow Improvements:
Content Trimming:
📝 numpy.md - Streamlining and Reorganization

Content Organization:
Terminology Improvements:
Content Removed (Redundant/Excessive):
Content Moved:
Enhancements:
📝 numpy_vs_numba_vs_jax.md - Major Reorganization

Title Change:
Bug Fix:
Content Added (from numpy.md):
Minor Improvements:
✅ Testing

All three lecture files have been validated:
The reorganization improves pedagogical flow by:
- Fix incomplete sentence: add missing word "compiler" (line 229)
- Fix header level inconsistency: change Multi-GPU Servers to #####
- Reorganize Overview section with clearer structure
- Simplify Python's Scientific Ecosystem section
- Restructure "Pure Python is slow" section for better flow
- Add concrete vectorization speed comparison example
- Improve parallelization section organization
- Clarify GPU/TPU accelerator discussion
- Remove redundant content and improve transitions throughout
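The kind of vectorization speed comparison this commit mentions can be sketched as follows (timings are machine-dependent; this is an illustrative example, not the lecture's exact code):

```python
import time
import numpy as np

n = 1_000_000
x = np.linspace(0, 10, n)

# Pure Python loop: one interpreted iteration per element
t0 = time.perf_counter()
total = 0.0
for xi in x:
    total += xi**2
loop_time = time.perf_counter() - t0

# Vectorized NumPy: the same sum computed in compiled C code
t0 = time.perf_counter()
total_vec = np.sum(x**2)
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.4f}s")
```

On typical hardware the vectorized version is one to two orders of magnitude faster, which is the motivation for the speed comparison section.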
Updates to need_for_speed.md

This commit includes grammar fixes and content reorganization to improve clarity and flow:

Grammar & Spelling Fixes
Content Reorganization
Content Improvements
All changes maintain technical accuracy while improving readability and pedagogical flow.
Add missing `import random` statement to fix NameError when running the vectorization example code that uses random.uniform(). Tested by converting to Python with jupytext and running successfully.
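The fix is simply adding the missing import; a minimal sketch of the pattern (not the lecture's exact code):

```python
import random  # previously missing, causing NameError on random.uniform()
import numpy as np

n = 1_000

# Loop-style draws with the standard-library random module
draws = [random.uniform(0, 1) for _ in range(n)]
m = max(draws)

# Equivalent vectorized draws with NumPy
m_np = np.random.uniform(0, 1, n).max()

print(0.0 <= m <= 1.0, 0.0 <= float(m_np) <= 1.0)
```

Because `random.uniform` lives in the standard library while `np.random.uniform` lives in NumPy, forgetting either import fails only at the call site, which is why executing the converted notebook caught it.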
Fix: Missing random module import

Fixed an execution error in the vectorization example.

Issue: Code called `random.uniform()` without importing the `random` module, raising a NameError.

Fix: Added the missing `import random` statement.

Testing: Converted to Python using jupytext and ran successfully.
Changed level 5 headers (#####) to level 4 headers (####) to fix invalid header hierarchy that was causing build failures.

Fixed headers:
- "GPUs and TPUs"
- "Why TPUs/GPUs Matter"
- "Single GPU Systems"
- "Multi-GPU Servers"

These were incorrectly using ##### (level 5) directly under ### (level 3) headers, skipping level 4. Now properly using #### headers.
Fix: Header hierarchy inconsistencies

Fixed invalid header structure that was causing build failures.

Issue: Four headers were using level 5 (`#####`) directly under level 3 (`###`) headers, skipping level 4.

Solution: Changed all four headers from `#####` to `####`. This should resolve the build failure.
Added comprehensive comparisons between NumPy, Numba, and JAX for both vectorized and sequential operations:

- Added Numba simple loop and parallel versions for vectorized example
- Demonstrated nested prange parallelization and its limitations
- Added detailed discussion of parallelization overhead and contention issues
- Implemented sequential operation (quadratic map) in both Numba and JAX
- Used JAX lax.scan with @partial(jax.jit, static_argnums) for cleaner code
- Added timing code with separate runs to show compile vs cached performance
- Included educational discussion without specific numbers (machine-independent)
- Added explanation of reduction problem challenges with shared variable updates
- Fixed spelling error: "implict" → "implicit"
- Added missing punctuation

All code examples tested and verified to run successfully.
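The `lax.scan` plus `@partial(jax.jit, static_argnums=...)` pattern for an inherently sequential quadratic (logistic) map can be sketched as follows, assuming `jax` is installed (an illustrative example, not the lecture's exact code):

```python
from functools import partial

import jax
import jax.numpy as jnp

# The trajectory length must be static for lax.scan, hence static_argnums.
@partial(jax.jit, static_argnums=(1,))
def quadratic_map(x0, n):
    # Each step depends on the previous value, so the loop cannot be
    # vectorized; lax.scan expresses it in a JIT-compilable way.
    def step(x, _):
        x_next = 4.0 * x * (1.0 - x)
        return x_next, x_next   # (carry, output)
    x_final, path = jax.lax.scan(step, x0, xs=None, length=n)
    return x_final, path

x_final, path = quadratic_map(jnp.float32(0.2), 5)
print(float(x_final), path.shape)
```

The first call pays compilation cost; subsequent calls with the same static `n` reuse the cached executable, which is what the separate timing runs in the commit are designed to show.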
Completed numpy_vs_numba_vs_jax lecture

This commit completes the numpy_vs_numba_vs_jax.md lecture with full NumPy/Numba/JAX comparisons.

What was added:

Vectorized Operations Section:
Sequential Operations Section:
Key Design Decisions:
Performance Insights (from testing):
All code has been tested and runs successfully end-to-end.
- Standardize header capitalization in need_for_speed.md
- Update code cell types to ipython3 in numba.md for consistency
- Remove redundant parallelization warning section in numba.md
- Enhance explanatory text and code clarity in numpy_vs_numba_vs_jax.md
- Fix formatting and add missing validation checks
Latest Update

Just pushed improvements to formatting and clarity across the parallel computing lectures:
These changes improve the overall polish and readability of the lectures without affecting the core content.
@mmcky would you mind looking at this? Was hoping to get these lectures finished this morning 😭
Context: I merged your changes to the environment into this PR. Might that be the issue?
@jstac not sure why you're unlucky with the
Thanks @mmcky !! 🥳 🥳
Merging!
Overview
This PR comprehensively reorganizes the parallel programming lectures to improve pedagogical flow, consolidate related material, and modernize the coverage of parallel computing technologies.
Major Changes
1. Expanded and Reorganized `need_for_speed.md`

Added new parallelization content:
- Hardware discussion moved in from `parallelization.md`
- New images (`geforce.png` for single GPU, `dgx.png` for multi-GPU server)

Key improvements:
2. Renamed `parallelization.md` → `numpy_vs_numba_vs_jax.md`

Content reorganization:

Moved content IN:
- From `numpy.md` (shows NumPy's implicit multithreading)
- From `jax_intro.md` (demonstrates JAX vectorization with `vmap`)

Result:
3. Enhanced `numba.md`

Content additions:
- Multithreading content from `parallelization.md`, including `prange` for CPU parallelization
- New exercises (`numba_ex3`, `numba_ex4`)

Content removals:
Organizational improvements:
4. Streamlined `numpy.md`

Content moved OUT:
- To `numpy_vs_numba_vs_jax.md`

Result:
5. Cleaned up `jax_intro.md`

Content moved OUT:
- To `numpy_vs_numba_vs_jax.md`

Link updates:
- Exercise link now points to `numba.md` instead of the old `parallelization.md`

Result:
6. Updated `_toc.yml`

- Changed `parallelization` to `numpy_vs_numba_vs_jax`

7. Cross-repository link updates

In the `lecture-jax` repository:
- Updated `jax_intro.md` link from `parallelization.html` → `numba.html`

Pedagogical Benefits
Testing
All modified lectures have been converted to Python and executed successfully:
- `need_for_speed.md`
- `numpy.md`
- `numba.md`
- `numpy_vs_numba_vs_jax.md`
- `jax_intro.md`

Expected errors in cells with `raises-exception` tags were confirmed to work correctly.

Breaking Changes
- `parallelization.md` → `numpy_vs_numba_vs_jax.md`
- URL change: `parallelization.html` → `numpy_vs_numba_vs_jax.html`
- Requires a link update in the `lecture-jax` repository

Files Changed
Modified:
- `lectures/need_for_speed.md` - Expanded parallelization coverage
- `lectures/numpy.md` - Streamlined content
- `lectures/numba.md` - Added multithreading, reorganized exercises
- `lectures/numpy_vs_numba_vs_jax.md` - Created from `parallelization.md` with new structure
- `lectures/jax_intro.md` - Removed vectorization section
- `lectures/_toc.yml` - Updated file reference

Added:
- `lectures/_static/lecture_specific/need_for_speed/geforce.png`
- `lectures/_static/lecture_specific/need_for_speed/dgx.png`

Deleted:
- `lectures/parallelization.md` (renamed to `numpy_vs_numba_vs_jax.md`)

External:
- `lecture-jax/lectures/jax_intro.md` - Updated link