[navy_captain, likelihood_ratio_process] Typo fixes and update non-IID likelihood ratio processes #511
Conversation
Pull Request Overview
This PR addresses typos in the navy_captain lecture and adds a substantial new section to likelihood_ratio_process covering non-IID likelihood ratio processes with Markov chain models.
- Corrects typos and improves clarity in the navy captain lecture
- Updates variable names from Greek letters (α, β) to Latin letters (A, B) for consistency
- Adds a comprehensive section on Markov chain likelihood ratio processes with theoretical analysis and simulations
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| lectures/navy_captain.md | Fixes typos, updates references to wald_friedman_2, changes variable names from α/β to A/B, and corrects mathematical formulations |
| lectures/likelihood_ratio_process.md | Adds new section on Markov chain models with KL divergence rate analysis and simulation code |
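For context on the "KL divergence rate analysis" mentioned above: the KL divergence rate between two Markov chains is the stationary-distribution-weighted average of the row-wise KL divergences of their transition matrices. Below is a minimal sketch of that computation; the helper names and example matrices are illustrative assumptions, not the lecture's actual code.

```python
import numpy as np

def stationary_dist(P):
    "Stationary distribution of P: left eigenvector for eigenvalue 1, normalized."
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

def markov_kl_rate(P_f, P_g):
    """KL divergence rate between two Markov chains:
    sum_i pi_f(i) * sum_j P_f(i, j) * log(P_f(i, j) / P_g(i, j))."""
    pi_f = stationary_dist(P_f)
    with np.errstate(divide='ignore', invalid='ignore'):
        # terms with P_f(i, j) == 0 contribute nothing by convention
        log_ratio = np.where(P_f > 0, np.log(P_f / P_g), 0.0)
    return float(pi_f @ np.sum(P_f * log_ratio, axis=1))

# two hypothetical 2-state chains (not the lecture's parameters)
P_f = np.array([[0.9, 0.1],
                [0.2, 0.8]])
P_g = np.array([[0.7, 0.3],
                [0.4, 0.6]])
print(markov_kl_rate(P_f, P_g))
```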
The review comment below concerns this snippet from the diff:

```python
if np.any((P_f > 0) & (P_g == 0)):
    return np.inf

valid_mask = (P_f > 0) & (P_g > 0)

log_ratios = np.zeros_like(P_f)
log_ratios[valid_mask] = np.log(P_f[valid_mask] / P_g[valid_mask])
```
The condition checks whether P_f > 0 where P_g == 0, which creates a logical issue: when P_g has zero entries where P_f is positive, the KL divergence should be infinite, but the function should also handle the remaining log(0) cases consistently with their proper mathematical treatment.
Suggested change (replacing the snippet above):

```python
# Use np.errstate to handle log(0) cases gracefully
with np.errstate(divide='ignore', invalid='ignore'):
    # Where P_f == 0, set log ratio to 0 (since 0 * log(0/q) = 0 by convention)
    log_ratios = np.where(P_f > 0, np.log(P_f / P_g), 0.0)
```
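For readers following the discussion, here is a self-contained sketch that combines both treatments: it returns np.inf when P_f puts mass where P_g has none, and drops the P_f == 0 terms by the 0·log(0/q) = 0 convention. The function name and example values are hypothetical, not the lecture's actual code.

```python
import numpy as np

def kl_div(P_f, P_g):
    """Discrete KL divergence sum_i P_f(i) * log(P_f(i) / P_g(i)).

    Returns np.inf when P_f puts positive mass where P_g has none;
    terms with P_f(i) == 0 contribute 0 by convention.
    """
    P_f, P_g = np.asarray(P_f, dtype=float), np.asarray(P_g, dtype=float)
    if np.any((P_f > 0) & (P_g == 0)):
        return np.inf
    mask = P_f > 0
    return float(np.sum(P_f[mask] * np.log(P_f[mask] / P_g[mask])))

print(kl_div([0.5, 0.5, 0.0], [0.25, 0.5, 0.25]))  # finite
print(kl_div([0.5, 0.5], [1.0, 0.0]))              # inf
```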
@HumphreyYang this doesn't inspire confidence either
This PR fixes typos in navy_captain and adds a section discussing non-IID likelihood ratio processes with models generated by two Markov chains.
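A rough sketch of the kind of simulation such a section might use, assuming the data are generated by chain P_f and starting from an arbitrary state (helper names, parameters, and matrices are illustrative, not the lecture's code): the log likelihood ratio should grow at roughly the chains' KL divergence rate.

```python
import numpy as np

def simulate_log_likelihood_ratio(P_f, P_g, T, seed=0):
    "Log likelihood ratio path log L_t for a state path drawn from P_f."
    rng = np.random.default_rng(seed)
    n = P_f.shape[0]
    x = 0                      # arbitrary initial state; initial-distribution term omitted
    log_L = np.empty(T)
    running = 0.0
    for t in range(T):
        x_next = rng.choice(n, p=P_f[x])
        running += np.log(P_f[x, x_next] / P_g[x, x_next])
        log_L[t] = running
        x = x_next
    return log_L

P_f = np.array([[0.9, 0.1],
                [0.2, 0.8]])
P_g = np.array([[0.7, 0.3],
                [0.4, 0.6]])
path = simulate_log_likelihood_ratio(P_f, P_g, T=5_000)
print(path[-1] / 5_000)   # should be close to the KL divergence rate of P_f relative to P_g
```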