
Commit

Merge pull request #1591 from FBruzzesi/patch/typos
Fix: Typos
CamDavidsonPilon committed Jan 15, 2024
2 parents b2be2ff + 0c0dd33 commit 2bd0627
Showing 27 changed files with 182 additions and 196 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -24,8 +24,8 @@ If you are new to survival analysis, wondering why it is useful, or are interest

## Contact
- Start a conversation in our [Discussions room](https://github.com/CamDavidsonPilon/lifelines/discussions).
- Some users have posted common questions at [stats.stackexchange.com](https://stats.stackexchange.com/search?tab=votes&q=%22lifelines%22%20is%3aquestion)
- creating an issue in the [Github repository](https://github.com/camdavidsonpilon/lifelines).
- Some users have posted common questions at [stats.stackexchange.com](https://stats.stackexchange.com/search?tab=votes&q=%22lifelines%22%20is%3aquestion).
- Creating an issue in the [Github repository](https://github.com/camdavidsonpilon/lifelines).

## Development

28 changes: 14 additions & 14 deletions docs/Changelog.rst
@@ -1063,7 +1063,7 @@ New features
- new ``lifelines.plotting.rmst_plot`` for pretty figures of survival
curves and RMSTs.
- new variance calculations for
``lifelines.utils.resticted_mean_survival_time``
``lifelines.utils.restricted_mean_survival_time``
- performance improvements on regression models’ preprocessing. Should
make datasets with high number of columns more performant.
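
As an aside, the corrected ``lifelines.utils.restricted_mean_survival_time`` utility can be called roughly as below. This is a minimal sketch: the Waltons dataset and the horizon ``t=40`` are illustrative, and ``return_variance`` is assumed to surface the new variance calculation mentioned in this entry.

.. code:: python

    from lifelines import KaplanMeierFitter
    from lifelines.datasets import load_waltons
    from lifelines.utils import restricted_mean_survival_time

    df = load_waltons()  # illustrative toy dataset
    kmf = KaplanMeierFitter().fit(df["T"], df["E"])

    # RMST up to t=40; return_variance=True is assumed to also return the new variance estimate
    rmst, variance = restricted_mean_survival_time(kmf, t=40, return_variance=True)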

@@ -1760,7 +1760,7 @@ Bug fixes
-------------------

- Some performance improvements to ``CoxPHFitter`` (about 30%). I know
it may seem silly, but we are now about the same or slighty faster
it may seem silly, but we are now about the same or slightly faster
than the Cox model in R’s ``survival`` package (for some testing
datasets and some configurations). This is a big deal, because 1)
lifelines does more error checking prior, 2) R’s cox model is written
@@ -1800,7 +1800,7 @@ New features
API changes
~~~~~~~~~~~

- ``inital_beta`` in Cox model’s ``.fit`` is now ``initial_point``.
- ``initial_beta`` in Cox model’s ``.fit`` is now ``initial_point``.
- ``initial_point`` is now available in AFT models and
``CoxTimeVaryingFitter``
- the DataFrame ``confidence_intervals_`` for univariate models is
@@ -1853,7 +1853,7 @@ New features

- new AFT models: ``LogNormalAFTFitter`` and ``LogLogisticAFTFitter``.
- AFT models now accept a ``weights_col`` argument to ``fit``.
- Robust errors (sandwich errors) are now avilable in AFT models using
- Robust errors (sandwich errors) are now available in AFT models using
the ``robust=True`` kwarg in ``fit``.
- Performance increase to ``print_summary`` in the ``CoxPHFitter`` and
``CoxTimeVaryingFitter`` model.
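
A minimal sketch of the new AFT options described in this entry, using ``LogNormalAFTFitter``; the ``rossi`` data and the constant weight column are purely illustrative.

.. code:: python

    from lifelines import LogNormalAFTFitter
    from lifelines.datasets import load_rossi

    rossi = load_rossi()
    rossi["w"] = 1.0  # illustrative weights column

    aft = LogNormalAFTFitter()
    # weights_col and robust=True are the fit options referenced in the changelog entry above
    aft.fit(rossi, duration_col="week", event_col="arrest", weights_col="w", robust=True)
    aft.print_summary()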
@@ -2066,7 +2066,7 @@ Bug Fixes
Series now (use to be numpy arrays)
- remove ``alpha`` keyword from all statistical functions. This was
never being used.
- Gone are astericks and dots in ``print_summary`` functions that
- Gone are asterisks and dots in ``print_summary`` functions that
represent signficance thresholds.
- In models’ ``summary`` (including ``print_summary``), the ``log(p)``
term has changed to ``-log2(p)``. This is known as the s-value. See
@@ -2105,7 +2105,7 @@ Bug Fixes
-------------------

- Fix in ``compute_residuals`` when using ``schoenfeld`` and the
minumum duration has only censored subjects.
minimum duration has only censored subjects.

.. _section-91:

@@ -2154,7 +2154,7 @@ Bug Fixes
- ``weights_col`` is added
- ``nn_cumulative_hazard`` is removed (may add back)

- some plotting improvemnts to ``plotting.plot_lifetimes``
- some plotting improvements to ``plotting.plot_lifetimes``

.. _section-94:

@@ -2203,7 +2203,7 @@ Bug Fixes
- ``statistics.pairwise_logrank_test`` now returns a
``StatisticalResult`` object instead of a nasty NxN DataFrame 💗
- Display log(p-values) as well as p-values in ``print_summary``. Also,
p-values below thesholds will be truncated. The orignal p-values are
p-values below thresholds will be truncated. The original p-values are
still recoverable using ``.summary``.
- Floats ``print_summary`` is now displayed to 2 decimal points. This
can be changed using the ``decimal`` kwarg.
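
A minimal sketch of the ``StatisticalResult`` behaviour described in this entry; the Waltons dataset is an illustrative choice.

.. code:: python

    from lifelines.datasets import load_waltons
    from lifelines.statistics import pairwise_logrank_test

    df = load_waltons()  # illustrative toy dataset with a "group" column
    result = pairwise_logrank_test(df["T"], df["group"], df["E"])

    result.print_summary()   # formatted table, with rounded/truncated p-values
    full = result.summary    # underlying DataFrame, where the original p-values are recoverable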
@@ -2275,7 +2275,7 @@ Bug Fixes
https://www.cs.cmu.edu/~pradeepr/convexopt/Lecture_Slides/Newton_methods.pdf.
Details about the Newton-decrement are added to the ``show_progress``
statements.
- Minimum suppport for scipy is 1.0
- Minimum support for scipy is 1.0
- Convergence errors in models that use Newton-Rhapson methods now
throw a ``ConvergenceError``, instead of a ``ValueError`` (the former
is a subclass of the latter, however).
@@ -2317,7 +2317,7 @@ Bug Fixes
that people could compute the p-values by hand incorrectly, a worse
outcome I think. So, this is my stance. P-values between 0.1 and 0.05
offer *very* little information, so they are removed. There is a
growing movement in statistics to shift “signficant” findings to
growing movement in statistics to shift “significant” findings to
p-values less than 0.01 anyways.
- New fitter for cumulative incidence of multiple risks
``AalenJohansenFitter``. Thanks @pzivich! See “Methodologic Issues
@@ -2458,7 +2458,7 @@ Bug Fixes
- added ``step_size`` param to ``CoxPHFitter.fit`` - the default is
good, but for extremely large or small datasets this may want to be
set manually.
- added a warning to ``CoxPHFitter`` to check for complete seperation:
- added a warning to ``CoxPHFitter`` to check for complete separation:
https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faqwhat-is-complete-or-quasi-complete-separation-in-logisticprobit-regression-and-how-do-we-deal-with-them/
- Additional functionality to ``utils.survival_table_from_events`` to
bin the index to make the resulting table more readable.
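
A minimal sketch of the binning functionality mentioned for ``utils.survival_table_from_events``; the ``collapse``/``intervals`` arguments and the interval edges are my assumption of how the binning is exposed, and the data is illustrative.

.. code:: python

    from lifelines.datasets import load_waltons
    from lifelines.utils import survival_table_from_events

    df = load_waltons()  # illustrative toy dataset

    # one row per distinct event time
    table = survival_table_from_events(df["T"], df["E"])

    # assumed binned form: collapse the index into coarser intervals for readability
    binned = survival_table_from_events(df["T"], df["E"], collapse=True, intervals=[0, 20, 40, 60, 80])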
@@ -2480,7 +2480,7 @@ Bug Fixes
0.11.2
------

- Changing liscense to valilla MIT.
- Changing license to valilla MIT.
- Speed up ``NelsonAalenFitter.fit`` considerably.

.. _section-114:
@@ -2555,7 +2555,7 @@ Bug Fixes
-----

- deprecates Pandas versions before 0.18.
- throw an error if no admissable pairs in the c-index calculation.
- throw an error if no admissible pairs in the c-index calculation.
Previously a NaN was returned.
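
For context on the c-index entry above, a minimal sketch of ``lifelines.utils.concordance_index``; the ``rossi`` data and the use of the negated partial hazard as the score are illustrative choices.

.. code:: python

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi
    from lifelines.utils import concordance_index

    rossi = load_rossi()  # illustrative dataset
    cph = CoxPHFitter().fit(rossi, duration_col="week", event_col="arrest")

    # higher partial hazard means higher risk, so negate it so that larger scores mean longer survival
    c_index = concordance_index(rossi["week"], -cph.predict_partial_hazard(rossi), rossi["arrest"])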

.. _section-120:
@@ -2749,7 +2749,7 @@ Bug Fixes
Also some good speed improvements.
- KaplanMeierFitter and NelsonAalenFitter now have a ``_label``
property that is passed in during the fit.
- KaplanMeierFitter/NelsonAalenFitter’s inital ``alpha`` value is
- KaplanMeierFitter/NelsonAalenFitter’s initial ``alpha`` value is
overwritten if a new ``alpha`` value is passed in during the ``fit``.
- New method for KaplanMeierFitter: ``conditional_time_to``. This
returns a DataFrame of the estimate: med(S(t \| T>s)) - s, human
2 changes: 1 addition & 1 deletion docs/Examples.rst
@@ -941,7 +941,7 @@ In both cases, the reported standard errors from a unadjusted Cox model will be
rossi = load_rossi()
# this may come from a database, or other libraries that specialize in matching
mathed_pairs = [
matched_pairs = [
(156, 230),
(275, 228),
(61, 252),
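
The ``matched_pairs`` snippet above is truncated in this diff. One way (not necessarily the docs' approach) to account for the matching is to cluster the standard errors on a pair id; a minimal sketch, with the pair handling entirely illustrative.

.. code:: python

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    rossi = load_rossi()
    matched_pairs = [(156, 230), (275, 228), (61, 252)]  # only the subset visible above

    # tag each matched subject with an illustrative pair id
    pair_ids = {subject: k for k, pair in enumerate(matched_pairs) for subject in pair}
    matched = rossi.loc[list(pair_ids)].copy()
    matched["pair_id"] = matched.index.map(pair_ids)

    # with the full list of pairs from the docs, cluster_col would make the reported
    # standard errors robust to the within-pair correlation, e.g.:
    # CoxPHFitter().fit(matched, duration_col="week", event_col="arrest", cluster_col="pair_id")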
4 changes: 2 additions & 2 deletions docs/Survival Regression.rst
@@ -111,7 +111,7 @@ New in v0.25.0, We can also use ✨formulas✨ to handle the right-hand-side of
cph.fit(rossi, duration_col='week', event_col='arrest', formula="fin + wexp + age * prio")
is analgous to the linear model with interaction term:
is analogous to the linear model with interaction term:

.. math::
\beta_1\text{fin} + \beta_2\text{wexp} + \beta_3 \text{age} + \beta_4 \text{prio} + \beta_5 \text{age} \cdot \text{prio}
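
For reference, the formula call shown in this hunk runs end-to-end like this (a short, self-contained version of the same snippet):

.. code:: python

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    rossi = load_rossi()
    cph = CoxPHFitter()
    # "age * prio" expands to the two main effects plus their interaction (age:prio)
    cph.fit(rossi, duration_col='week', event_col='arrest', formula="fin + wexp + age * prio")
    cph.print_summary()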
@@ -513,7 +513,7 @@ Below we compare the non-parametric and the fully parametric baseline survivals:
ax = cph_spline.baseline_cumulative_hazard_[bch_key].plot(label="spline")
cph_semi.baseline_cumulative_hazard_[bch_key].plot(ax=ax, drawstyle="steps-post", label="semi")
cph_piecewise.baseline_cumulative_hazard_[bch_key].plot(ax=ax, label="peicewise[20,35]")
cph_piecewise.baseline_cumulative_hazard_[bch_key].plot(ax=ax, label="piecewise[20,35]")
plt.legend()
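
A sketch of how the three fitters referenced in this snippet (``cph_semi``, ``cph_spline``, ``cph_piecewise``) might be constructed; the knot count is illustrative, and ``breakpoints=[20, 35]`` simply mirrors the label in the plot.

.. code:: python

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    rossi = load_rossi()

    # semi-parametric (default Breslow baseline), spline baseline, and piecewise-constant baseline
    cph_semi = CoxPHFitter().fit(rossi, "week", "arrest")
    cph_spline = CoxPHFitter(baseline_estimation_method="spline", n_baseline_knots=3).fit(rossi, "week", "arrest")
    cph_piecewise = CoxPHFitter(baseline_estimation_method="piecewise", breakpoints=[20, 35]).fit(rossi, "week", "arrest")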
2 changes: 1 addition & 1 deletion docs/Time varying survival regression.rst
@@ -78,7 +78,7 @@ The new dataset looks like:
+--+-----+----+----+-----+


You'll also have secondary dataset that references future measurements. This could come in two "types". The first is when you have a variable that changes over time (ex: administering varying medication over time, or taking a tempature over time). The second types is an event-based dataset: an event happens at some time in the future (ex: an organ transplant occurs, or an intervention). We will address this second type later. The first type of dataset may look something like:
You'll also have secondary dataset that references future measurements. This could come in two "types". The first is when you have a variable that changes over time (ex: administering varying medication over time, or taking a temperature over time). The second types is an event-based dataset: an event happens at some time in the future (ex: an organ transplant occurs, or an intervention). We will address this second type later. The first type of dataset may look something like:

Example:

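
The kind of secondary, time-varying dataset described above is typically folded into the base (start/stop) dataset with ``lifelines.utils.add_covariate_to_timeline``; a minimal sketch with illustrative column names and values:

.. code:: python

    import pandas as pd
    from lifelines.utils import add_covariate_to_timeline

    # base dataset in long (start/stop) format
    base_df = pd.DataFrame({
        "id": [1, 2],
        "start": [0, 0],
        "stop": [10.0, 12.0],
        "event": [True, False],
    })

    # secondary dataset: a covariate measured at different times per subject
    cv = pd.DataFrame({
        "id": [1, 1, 2],
        "time": [0, 4, 0],        # when the measurement was taken
        "dose": [3.0, 5.0, 2.0],  # hypothetical time-varying covariate
    })

    long_df = add_covariate_to_timeline(base_df, cv, id_col="id", duration_col="time", event_col="event")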
12 changes: 6 additions & 6 deletions docs/jupyter_notebooks/Cox residuals.ipynb

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions docs/jupyter_notebooks/Custom Regression Models.ipynb

Large diffs are not rendered by default.

26 changes: 13 additions & 13 deletions docs/jupyter_notebooks/Modelling time-lagged conversion rates.ipynb

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions docs/jupyter_notebooks/Proportional hazard assumption.ipynb

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions examples/Cox residuals.ipynb

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions examples/Custom Regression Models.ipynb

Large diffs are not rendered by default.

44 changes: 15 additions & 29 deletions examples/Customer Churn.ipynb

Large diffs are not rendered by default.

26 changes: 13 additions & 13 deletions examples/Modelling time-lagged conversion rates.ipynb

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions examples/Proportional hazard assumption.ipynb

Large diffs are not rendered by default.

30 changes: 15 additions & 15 deletions examples/SaaS churn and piecewise regression models.ipynb

Large diffs are not rendered by default.
