This repository has been archived by the owner on Dec 18, 2023. It is now read-only.

Regression tutorials #1195

Closed
wants to merge 1,797 commits into from

Conversation

feynmanliang
Contributor

Motivation

Tutorial improvements for OSS release

Changes proposed

  • Merges robust and ordinary linear regression tutorials
  • Cleans up robust regression to run without errors

Test Plan

Manual review

Types of changes

  • Docs change / refactoring / dependency upgrade
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING document.
  • I have added tests to cover my changes.
  • All new and existing tests passed.
  • The title of my pull request is a short description of the requested changes.

jpchen and others added 30 commits September 10, 2021 02:31
Summary:
Pull Request resolved: #995

D30672933 changes the singular matrix error message, so now we have to catch that.

Reviewed By: dme65, neerajprad

Differential Revision: D30852254

fbshipit-source-id: 8938bfe00261448d70589348ca55db267e1ce1d5
Summary:
Pull Request resolved: #993

To handle stochastic control flows of the form

    some_rv(some_stochastic_expression_with_finite_support)

we need to enumerate that finite support during graph accumulation. However, there could be thousands of nodes that are ancestors of that stochastic expression, and the support might depend on all of them. We cannot compute the support recursively because if there is a long path through the graph, we blow Python's recursion limit.

We already have a base class which implements non-recursive computation and memoization of a per-node property; this is the base class used by the lattice typer and node sizer. We therefore extend it once more to assign supports to nodes.

We represent a support by a set of tensors. Much of the time this will just be an ordinary deduplicated set of tensors; there are three cases where it is not:

* If the support of a node is infinite -- say, a sample from a normal -- then we represent it as a special Infinite value.
* Some stochastic nodes will have finite but large support. We can estimate the size of the support before we compute it, and if it is likely to be too large, we skip the computation and represent it as a special "too big" value.
* Nodes for which we have not yet implemented computation of the support will be represented by an "unknown" value.

In this diff I just implement computing support for:

* samples from infinite-support distributions
* samples from Bernoulli draws of arbitrary tensor size
* construction of stochastic tensors

We'll add the rest in subsequent diffs.

We have an existing implementation of support computation via instance methods which is (1) recursive and (2) frequently wrong. Once all necessary nodes have their support computation added to the new class, I will delete the instance methods.
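The nonrecursive, memoized per-node computation described above can be sketched as follows. This is a hypothetical illustration of the pattern (the `Node` and `NodePropertyComputer` names are invented here), not the actual Bean Machine base class:

```python
class Node:
    """Toy graph node; real BMG nodes carry much more information."""
    def __init__(self, inputs=()):
        self.inputs = list(inputs)


class NodePropertyComputer:
    """Computes a per-node property bottom-up without recursion.

    compute_one(node, input_values) returns the property value for a
    node given the already-computed values of its inputs. Results are
    memoized, so shared ancestors are computed only once.
    """

    def __init__(self, compute_one):
        self._compute_one = compute_one
        self._cache = {}

    def __getitem__(self, node):
        if node in self._cache:
            return self._cache[node]
        # Explicit stack instead of recursion, so a long path through
        # the graph cannot blow Python's recursion limit.
        stack = [node]
        while stack:
            top = stack[-1]
            pending = [i for i in top.inputs if i not in self._cache]
            if pending:
                stack.extend(pending)
                continue
            stack.pop()
            if top not in self._cache:
                inputs = [self._cache[i] for i in top.inputs]
                self._cache[top] = self._compute_one(top, inputs)
        return self._cache[node]
```

For example, a "depth" property computed over a 5000-node chain succeeds where a naive recursive implementation would hit the recursion limit.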

Reviewed By: wtaha

Differential Revision: D30050828

fbshipit-source-id: abb99eff3aba671812238a9721fef04e022b4817
Summary:
By design, we lack facilities in BMG to run inference on vectorized models. That is, BMG for example supports sampling from Bernoulli(0.5) to get a Boolean, and it supports independent-identical-distribution sampling from Bernoulli(0.5) to get a vector of independent Booleans, but it does NOT support sampling from Bernoulli([0.25, 0.75]) to get a vector of independent but *non-identical* samples.

However we would like to be able to compile such models to BMG if possible. Doing so will allow us to test whether we still get a performance improvement from BMG for such models.

Doing so will require a series of changes. First, we need to know when a model even is vectorized, and when the vectors are of the right shape to permit compilation to BMG. (Recall BMG only supports atomic values and 2-d matrices of atomic values.) My recent improvements to the tensor size computation on graphs will help solve that problem.

Second, we need to find an appropriate transformation from the vectorized graph to a less vectorized graph that BMG can handle. We will need to have custom code for each distribution type, so there will be many changes required to get there.

This diff simply implements a proof of concept that we can in fact do so successfully in principle. It implements a transformation ONLY on Bernoulli distributions where the input is a `Size([2])` tensor.

As you can see from the before and after graphs, this works!  Note though that there is a small inefficiency: a rewrite which introduces a constant index into a constant tensor does not simply add the constant result to the graph. I will add that optimization in a later diff.

This transformation correctly handles queries, which can be on any node, but does not correctly handle observations. If we have an observation of a `Size([2])` sample, we will end up with a graph where we have an observation of a ToMatrix, which is wrong. BMG only permits observations of sample nodes.  In an upcoming diff I will add a feature to the rewriter to handle this case.

BEFORE:

{F639574372}

AFTER:

{F639574527}
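The rewrite described above can be sketched with toy tuple-based graph nodes (an illustration of the transformation's shape, not the real compiler classes):

```python
def devectorize_bernoulli(probs):
    """Rewrite sample(Bernoulli(probs)), for a 1-d constant probability
    vector, into one scalar sample per element, gathered by a ToMatrix
    node so downstream consumers still see a single tensor-valued node.
    """
    scalar_samples = []
    for i, _ in enumerate(probs):
        # Index into the constant probability vector; a later
        # constant-folding pass can replace the index operation
        # with the constant result.
        p_i = ("index", ("constant", tuple(probs)), ("constant", i))
        scalar_samples.append(("sample", ("bernoulli", p_i)))
    return ("to_matrix", tuple(scalar_samples))
```

So `Bernoulli([0.25, 0.75])` becomes two independent scalar Bernoulli samples, one with probability 0.25 and one with 0.75, reassembled into a matrix.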

Reviewed By: wtaha

Differential Revision: D30165910

fbshipit-source-id: d51331c522cf75f4d592bf743b637065c45d5e42
* Create docs.yml

* Update docs.yml

* Update Makefile

* Create main.yml

* Update and rename main.yml to docs.yml

* Delete docs.yml

* Update conf.py

* Update Makefile

* Update Makefile

* Update Makefile

* Update Makefile

* Update docs.yml

* Update docusaurus.config.js

* updated config
Summary:
Pull Request resolved: #997

When a rewriter generates an indexing operation it is possible to end up with constants for both the collection and the index; in that case we can write a constant folding optimization.  In this diff I refactored the index rewriter to make the logic a bit easier to follow by extracting the code for vector index to its own method, and then added a case for constants for both operands.

Graph before optimization implemented:

{F639574527}

after optimization:

{F639592295}
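The constant-folding case described above can be sketched like so, again using toy tuple nodes rather than the real compiler classes:

```python
def fold_constant_index(node):
    """Constant-fold an indexing node when both the collection and the
    index are constants; otherwise leave the node unchanged."""
    if node[0] != "index":
        return node
    _, collection, index = node
    if collection[0] == "constant" and index[0] == "constant":
        # Both operands are known at compile time: replace the whole
        # indexing operation with the constant result.
        return ("constant", collection[1][index[1]])
    return node
```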

Reviewed By: wtaha

Differential Revision: D30167970

fbshipit-source-id: d7063927a1a636572d828165f4c8eb62614666b3
Summary:
Pull Request resolved: #996

In the past we used `numpy.cholesky` in NMC implementation because `torch.cholesky` fails very slowly in comparison. Since a workaround has been proposed in pytorch/pytorch#34272, we can now switch back to using torch and avoid the numpy conversion.

A small benchmark to show the run time difference with the workaround: N1136967

(I just happen to spot this when I was looking at NMC to see how to merge the infra together. There will probably be quite a few changes coming on single site infra -- I will try to keep each of them small to make reviewing easier :).)

Reviewed By: jpchen, neerajprad

Differential Revision: D30895231

fbshipit-source-id: bbc375d005f2e39e7fe98f3f652414367f6a9814
Summary:
Pull Request resolved: #999

My model unvectorizer prototype still only supports one kind of model: a Bernoulli with two inputs instead of one.  In this diff I add the ability to have a two-value observation on such a Bernoulli; the unvectorizer rewrites it into two observations of two samples and deletes the original sample and observation.

I had assertions that observations were always observing samples, which is an invariant of BMG. However, due to the incremental nature of the rewriting process we end up with a temporary situation where an observation observes a stochastic tensor. I've removed those assertions.

Reviewed By: wtaha

Differential Revision: D30500846

fbshipit-source-id: b0bd26889fa5e215236914847ae255ec107e2e82
Summary:
feynmanliang my fork of `facebookincubator/flowtorch` was deleted when I connected my FB employee account to GitHub, so I have duplicated facebookincubator/flowtorch#58 here (and added some additional functionality)

### Motivation
Shape information for a normalizing flow only becomes known when the base distribution has been specified. We have been searching for an ideal solution to express the delayed instantiation of Bijector and Params for this purpose. Several possible solutions are outlined in facebookincubator/flowtorch#57.

### Changes proposed
The purpose of this PR is to showcase a prototype for a solution that uses metaclasses to express delayed instantiation. This works by intercepting `.__call__` when a class is instantiated and returning a lazy wrapper around the class and its bound arguments if only partial arguments are given to `.__init__`. If all arguments are given, then the actual object is initialized. The lazy wrapper can have additional arguments bound to it, and will only become non-lazy when all the arguments are filled in (or have defaults).

Pull Request resolved: facebookincubator/flowtorch#59

Reviewed By: jpchen

Differential Revision: D30782184

Pulled By: stefanwebb

fbshipit-source-id: f0be468015f298bfa0b40412142c493400c7efec
…ors (#1000)

Summary:
Pull Request resolved: #1000

I am continuing to experiment with my prototype devectorizer; previously we only supported the test case of a Bernoulli with a probability of size 2. We now implement Bernoulli with probability of any size >= 2 so long as it is single-dimensional. Supporting two-dimensional probabilities, or distributions other than Bernoulli, is not yet implemented.

Reviewed By: wtaha

Differential Revision: D30522236

fbshipit-source-id: 2a454ad40d1be42fa0b567b620c86e2f2025551b
Reviewed By: zertosh

Differential Revision: D30983102

fbshipit-source-id: 595f4f98ee2f72d8ec56fb6d7277c773ed778e45
Summary:
Pull Request resolved: #955

See #954 (comment)

Reviewed By: horizon-blue

Differential Revision: D30378532

fbshipit-source-id: ca0af33b07aa4681b1abd87be22b3683daa0dc75
Summary:
Records the max cardinality of a random_variable based on calling `update_support`. Only valid for finite support dists (Bernoulli and Categoricals right now).  Right now this will only be used in an enumerate method (upcoming diff), and so is not invoked each time `update_graph` is called.

Needed for discrete enumeration of RVs

Reviewed By: rodrigodesalvobraz

Differential Revision: D30390183

fbshipit-source-id: e05651da2708988d91ac3b99a65b0b09608d22d9
Summary:
Pull Request resolved: #1002

The devectorizer prototype now can handle 1-d or 2-d probabilities as inputs to a Bernoulli.

As noted in the test case, we do not yet do constant folding on multi-dimensional tensor indexing where all operands are constants. I'll fix that up in an upcoming diff.

Reviewed By: wtaha

Differential Revision: D30527567

fbshipit-source-id: a8c5316e9083737dac65b7a8a217b0cc743e2b19
Summary:
Pull Request resolved: #1003

The compiler's graph optimization pass previously would optimize away indexing operations where the matrix was 1-dimensional and the index was a constant, but we can similarly optimize away indexing into 2-d matrices. This diff implements that optimization.

Graph before optimization; see how both the column and vector index operations are only constants.

{F657353375}

Graph after optimization; all index operations are eliminated:

{F657353554}

Reviewed By: wtaha

Differential Revision: D30583515

fbshipit-source-id: 9aa32fa632f377e126fac883c66b6d3542d0c153
Summary:
Pull Request resolved: #1004

I wish to remove the `size` property from all graph nodes, which means first eliminating all code which consumes this property.  The `_required_columns` property on a Dirichlet node uses this property; this does not need to be a property of the node.  We can compute it when we need it in the lattice typer or sizer. (And also the property is misnamed, as it has the semantics of required *rows*.  Might as well delete it instead of fixing the name.)

This diff begins the process of removing usages of this property, starting with the code generators.

Reviewed By: wtaha

Differential Revision: D30586326

fbshipit-source-id: 02e5bdd31fad378340555f2a3f5876bd97bbdbd4
Summary:
Pull Request resolved: #1005

Updated BeanMachine's setup.py so that it uses FlowTorch >= 0.3.

I am having trouble checking out D30972439, which is identical to this diff, to rerun tests that failed due to unknown infra problems...

The exact error is:
```
This patch cannot be applied:
patching file fbcode/beanmachine/sphinx/source/conf.py
Hunk #1 FAILED at 44
1 out of 1 hunks FAILED -- saving rejects to file fbcode/beanmachine/sphinx/source/conf.py.rej
abort: patch failed to apply
```

Reviewed By: ericlippert

Differential Revision: D31025741

fbshipit-source-id: d45f0a8b744d792de8e49a7744b44a52325742f9
Summary:
Pull Request resolved: #1007

I have finished removing the incorrectly named, badly located and poorly implemented `_required_columns` property from `DirichletNode`.  The name was wrong because it actually computed required rows; it was badly located because the input requirement is a concern of the BMG requirements module, not the node, and it was poorly implemented because it tried to handle input types unsupported by BMG.

All these problems are now fixed; the logic is no longer in the node so the badly named method no longer exists. The code is now relocated to the lattice type and requirements modules. The implementation now makes no attempt to handle unsupported scenarios; in the case where a model has an unsupported tensor shape as the Dirichlet input we give up on attempting to determine its type and default to "1x1 simplex", and then give an error during requirements checking.

This diff furthers the larger objective of removing the `size` property from node classes.

Reviewed By: wtaha

Differential Revision: D30592028

fbshipit-source-id: a1f2cf458337b713c0eb35581c93f30f58924ade
Summary:
Pull Request resolved: #1008

In order to build a graph for control flows of the form `some_rv(some_other_rv())` we compute the support -- the set of all possible values -- for  `some_other_rv` and then determine the value of `some_rv` for each possibility.  This means that we must be able to compute the support of any node representing a value in the graph.

My intention in this refactoring is to move that code out of the node classes themselves and into a class similar to the one which computes node types. We require that this algorithm be non-recursive.

In this diff I implement support computation for the majority of the unary and binary operator nodes. This is quite straightforward because for all of those operators we can compute the support of an operator `foo(x)` or `foo(x, y)` by computing all possible combinations of the operands, and then computing the value of the operator for those combinations.

I have constructed a mapping from node type to a function which computes the value of the operator; we compute the Cartesian product of the operand supports and pass them to the appropriate function to compute the possible values.

In upcoming diffs I will remove the remaining callers of the existing instance methods and delete the instance methods.
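The combination scheme described above can be sketched in a few lines. This is an illustrative toy (the real code maps node types to value functions and operates on tensors):

```python
from itertools import product


def operator_support(op, *operand_supports):
    """Support of an operator node foo(x, ...): apply the operator's
    value function to the Cartesian product of the operand supports,
    deduplicating the results into a set."""
    return {op(*combo) for combo in product(*operand_supports)}
```

For instance, the support of an addition of two coin flips taking values in {0, 1} is {0, 1, 2}.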

Reviewed By: wtaha

Differential Revision: D30608810

fbshipit-source-id: 94ca273e3f1c6c3ab26c43e694a9121543a7e2bc
Summary:
Pull Request resolved: #1009

I am continuing to move logic for computing the support of a node out of the node classes and into a class specifically for this computation.  This time, I'm moving the logic out of the categorical node class.

Reviewed By: wtaha

Differential Revision: D30742929

fbshipit-source-id: c76d219faa879beab9339b0bc2e279a887d55db2
Summary:
Pull Request resolved: #1011

The code which computes the support of a "switch" -- a temporary node that we generate when there is a stochastic control flow in the model -- is now ported to the support computation model; the original code in the node class will be removed in the next diff.

Reviewed By: wtaha

Differential Revision: D30794614

fbshipit-source-id: 5db4ecf34538b2a19597e636f1b6931fa7185054
Summary:
Pull Request resolved: #1013

We need to know the support -- the set of possible values of a stochastic expression -- to handle compiling models of the form

    x = some_rv(some_finite_rv())

where some_finite_rv has small, finite support. It needs to be small and finite because we're going to actually call some_rv with each possible value.

For our purposes we define small as <= 1000.  (We should add a mechanism to tweak this parameter later.)

Previously we had each node compute its own support, but (1) this is not a concern of the node, and (2) we need extra mechanisms to also compute approximate support size, and (3) the implementation was recursive, which crashes if the model has a path that exceeds python's recursion limit.

We now have a module using the same nonrecursive, mutation-aware algorithm that the type checker uses. In this diff I switch over the graph accumulator to use the new mechanism. In the next diff I will delete the now-unused support instance methods.
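The "small, finite support" requirement described above can be sketched as follows (hypothetical names; the real sentinel and limit mechanism live in the support module):

```python
SUPPORT_LIMIT = 1000  # "small" as defined in the text; should be tweakable
TOO_BIG = object()    # sentinel for supports too large to enumerate


def expand_stochastic_call(some_rv, finite_support):
    """To compile some_rv(some_finite_rv()), build the value of some_rv
    for every possible argument value -- provided the support is small
    enough that calling some_rv once per value is feasible."""
    if len(finite_support) > SUPPORT_LIMIT:
        # Too large to enumerate; a later pass reports an error.
        return TOO_BIG
    return {value: some_rv(value) for value in sorted(finite_support)}
```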

Reviewed By: wtaha

Differential Revision: D30738627

fbshipit-source-id: 5e745f272d59844acd44e1ba2e36b13160dee42c
Summary:
Pull Request resolved: #1015

There's a previously unexercised code path in the requirements checker that Walid came across which caused an assertion to fire. This diff fixes that problem.

When we accumulate a graph we mark all the non-stochastic constant nodes as "untyped" initially, and assume that during requirements checking, every untyped constant will have an edge emerging from it that has a specific BMG type as its requirement.  We then replace the untyped constant node with a correctly typed BMG constant node.

This assumption is not necessarily correct; we could have a situation where an edge has an "any" requirement -- that is, allow an input of any type -- and the edge goes from an untyped constant to, say, an operator. In this situation we still need to deduce a type for the constant. Previously this never happened but Walid has created a scenario in which it does.

What if we have an "any" requirement but the node is a constant not representable at all in BMG? That seems bad. But by the time we get to requirements checking we have already rejected constant nodes that are not representable at all in BMG, so we don't need to worry about that. We know the node is representable in BMG; we just need to ensure that we produce a valid constant node.

What we do now is: if we are checking an outgoing edge requirement on a constant, and the requirement is "any", then the required BMG type of the node is the current BMG type of the value. However, there is one wrinkle: the lattice typer will classify False/0 and True/1 as having types Zero and One, to indicate that they are convertible to many other types. In those cases we'll deduce that what the user wants here is a bool constant node.  The requirement is "any" and a boolean meets that requirement.
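The deduction rule described above can be sketched as a small function. The type names here are illustrative stand-ins for the compiler's lattice types, and the non-boolean branches are simplified assumptions, not the actual typing rules:

```python
def type_for_any_requirement(value):
    """Deduce a BMG type for an untyped constant whose outgoing edge
    carries only an "any" requirement (hedged sketch)."""
    # False/0 and True/1 have lattice types Zero and One, indicating
    # convertibility to many types; under "any" we choose bool.
    if value in (0, 1):
        return "Boolean"
    if isinstance(value, int) and value >= 0:
        return "Natural"
    if isinstance(value, float) and 0.0 < value < 1.0:
        return "Probability"
    return "Real"
```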

Reviewed By: wtaha

Differential Revision: D31157835

fbshipit-source-id: 507fbb8fb00f59dcbaa4a00c7613fc22ca66f1db
Summary:
Pull Request resolved: #1016

As noted in the previous diff, I've removed the last caller of the instance-method based support computation and therefore can delete the dead code.

I also move some helper methods out of bmg_nodes and into the calling module.

Reviewed By: wtaha

Differential Revision: D30796720

fbshipit-source-id: efa15502b7fc087f706b6f30f04e5e41a72ac62a
…pes (#1014)

Summary:
Pull Request resolved: #1014

We expect sup(a,b) to be symmetric. That kind of property makes for nice tests, and caught a few minute discrepancies. This diff fixes them. It would be fun to add a few more diffs like this, such as testing the associativity of this operator.

Reviewed By: ericlippert

Differential Revision: D31172843

fbshipit-source-id: 59837d3ac54c8c77d5316cc25c2385fe2b0fc3d7
#1018)

Summary:
Pull Request resolved: #1018

tldr: the requirements fixer has complex preconditions that are easy to violate. This diff makes it easier on compiler developers to figure out what they've forgotten to do when the requirements fixer fails, by (1) failing early, and (2) producing a helpful error message.

excessive details:

The purpose of the problem fixing passes is to transform the graph into one where every ancestor of a query or observation is (1) supported by BMG, and (2) each edge in the graph that is on a path to a query or observation meets the requirements of the BMG type system.  That is to say: when the problem fixing passes are done, we should either have produced an error, or we can produce a BMG graph without causing BMG to throw an exception.

The requirements fixer checks to see if every such edge meets BMG's requirements; if an edge does not meet a requirement then the requirements checker either produces an error, or it mutates the graph to remove the offending edge. (For example, by inserting a TO_REAL node and rerouting the outgoing edge through it.)

The requirements fixer has many preconditions. It assumes:

* Every type requirement imposed on an edge is for an actual valid BMG type. For example, type analysis might determine that a node is a 3-dimensional tensor, but BMG only supports 2-dimensional matrices. Since the whole point of the requirement fixer is to make the graph compliant with the BMG type system, it makes no sense to ask it to find a way to make an edge point from a node with an unsupported type. If we ever ask to impose such a requirement, there is probably a bug in the code which computes requirements.

* Every node which has a requirement on its outgoing edge is a node that *has* an outgoing edge! We should never be imposing a requirement on a query, observation or factor, for instance, because they never have outgoing edges to begin with. If the compiler is ever trying to check outgoing edge requirements on such a node, then something has gone very wrong with the graph topology.

* Every node (except constants, which are fixed by the requirements checker itself) that has an outgoing edge being checked for validity is already a node supported by BMG. Again, the purpose of the requirements fixer is to make the edges right; if we have a node in the graph that is not even a valid BMG node then it makes no sense to check its edges; the graph cannot possibly be right. The requirements checker expects that the unsupported node fixer has already run, and has either produced an error (in which case the requirements checker should not run at all) or successfully removed all the unsupported nodes.

* Every node (except constants) can have a type assigned to it by the lattice typer. Even if every outgoing edge has a requirement of "any", we still want to maintain the invariant that all BMG nodes in the graph have a BMG node type associated with them. If they don't, that's evidence of a bug in the lattice typer.
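The "fail early with a helpful message" approach described above can be sketched as explicit precondition checks. Everything here is a simplified toy (string-valued kinds and types instead of the real node and lattice classes):

```python
VALID_BMG_TYPES = {"Boolean", "Natural", "Probability", "Real", "PositiveReal"}
LEAF_NODES = {"Query", "Observation", "Factor"}  # never have outgoing edges


def check_preconditions(node_kind, requirement, lattice_type, is_constant):
    """Fail early, with a message pointing at the likely buggy pass,
    when a precondition of the requirements fixer is violated."""
    if requirement != "any" and requirement not in VALID_BMG_TYPES:
        raise AssertionError(
            f"requirement {requirement!r} is not a valid BMG type; "
            "suspect the code that computes requirements")
    if node_kind in LEAF_NODES:
        raise AssertionError(
            f"{node_kind} nodes have no outgoing edges to check; "
            "suspect the graph topology")
    if not is_constant and lattice_type is None:
        raise AssertionError(
            f"{node_kind} node has no lattice type; suspect the "
            "lattice typer or the unsupported-node fixer")
```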

Reviewed By: wtaha

Differential Revision: D31175997

fbshipit-source-id: f1b29810785d936dab66caeb93e4082b427dea22
Summary:
Pull Request resolved: #1019

We need the tensor size of a node during graph rewriting, particularly when devectorizing models.

I wish to move computation of the tensor size of a node out of the node classes; this is not their concern, and the implementation needs to be nonrecursive. All this functionality is now moved to sizer.py and I can delete the dead implementations on the node classes.

Reviewed By: wtaha

Differential Revision: D30811484

fbshipit-source-id: 447af5747afa68b329059d580304a89f76ec16a4
Summary: add NUTS as specified in paper to BMG

Reviewed By: rodrigodesalvobraz

Differential Revision: D29436569

fbshipit-source-id: 7428ff8d7868409ff5254e3ef46a7e8e81cf6dd9
Summary:
Pull Request resolved: #977

moving code around, no new implementations

move global proposers to global/proposer/ folder

add separate .h and .cpp file for each proposer, moving them out of global_proposer.h

Reviewed By: rodrigodesalvobraz

Differential Revision: D30682498

fbshipit-source-id: 37bb91f354669f618ac5251415c553d4bc078c56
Summary: Previously, GlobalState initialized the values of the random variables when the MH algorithm was initialized, so all MH algorithms needed a seed during initialization. I'm moving this value initialization process to take place directly within `infer`.

Reviewed By: rodrigodesalvobraz

Differential Revision: D30557589

fbshipit-source-id: 1e270819f90051fe290627a52f22868fc5a0d328
Summary:
Added three types of global initialization:
RANDOM: Uniform(-2, 2) similar to Stan
ZERO: all 0
PRIOR: sample from prior
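The three schemes listed above can be sketched as follows; this is an illustrative outline, not the actual Bean Machine implementation:

```python
import random
from enum import Enum


class InitStrategy(Enum):
    RANDOM = "random"  # Uniform(-2, 2), similar to Stan
    ZERO = "zero"      # all zeros
    PRIOR = "prior"    # sample from the prior


def initialize_value(strategy, prior_sample, rng=random):
    """Pick an initial value for a latent variable according to the
    chosen global initialization strategy (hypothetical helper)."""
    if strategy is InitStrategy.RANDOM:
        return rng.uniform(-2.0, 2.0)
    if strategy is InitStrategy.ZERO:
        return 0.0
    return prior_sample()  # InitStrategy.PRIOR
```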

Reviewed By: rodrigodesalvobraz

Differential Revision: D30558325

fbshipit-source-id: c6533c9183f0127d9b1e6bcf20ab1aef37b15db1
wtaha and others added 17 commits December 7, 2021 15:15
)

Summary:
Pull Request resolved: #1185

Previously there was only a Robust Linear Regression tutorial and not the basic Linear Regression one. This diff adds the one that was missing. This version also includes the documentation for running BMGInference.

Note: This code ran without issue on Bento, but testing it on jupyter on a laptop generated errors. We should deal with getting it to run on laptops in a separate diff.

Reviewed By: jpchen

Differential Revision: D32931043

fbshipit-source-id: 6e6b60750a98e1f613eaa4fa052c37b1432fc2ff
Summary: making website

Reviewed By: wtaha

Differential Revision: D32888718

fbshipit-source-id: b6a273bffc5c562b1869f36d6d2c83e0f5832bc0
Summary:
### Motivation
Rather than include all symbols (modules, classes, functions etc.) under `beanmachine.*`, we decided as a group to manually include modules as needed.

### Changes proposed
I have modified the `documentation.toml` file so that only modules that should be included are specified in the regular expression. I have also added modules that should be excluded to the corresponding regex as an additional precaution.

There are some classes and functions within modules that should be excluded (in `filters.exclude.symbols`), however I haven't yet added the functionality to filter at the symbol level (only the module level) and will do so in a future diff

Pull Request resolved: #1161

Test Plan:
`python sphinx/source/docs.py` to view modules that will be documented by `make html`

**Static Docs Preview: beanmachine**
|[Full Site](https://our.intern.facebook.com/intern/staticdocs/eph/D32834151/V5/beanmachine/)|

|**Modified Pages**|

Reviewed By: jpchen

Differential Revision: D32834151

Pulled By: stefanwebb

fbshipit-source-id: 80c3a2183d8514cb0c45abaa0f145cdd1442e80a
Summary: Overhauls Random Walk docs. Lots of rewriting to add critical material that was not part of the original docs. Also some less critical material in terms of just making the docs a bit easier to read through and more consistent with other docs.

Reviewed By: wtaha

Differential Revision: D32908947

fbshipit-source-id: ebe3ca8db044218b9920442064ef8d7d196a59b4
Summary:
Pull Request resolved: #1187

This tutorial existed, including the stub entry in the tutorials section, but it did not link to the actual Notebook.

Reviewed By: michaeltingley

Differential Revision: D32897359

fbshipit-source-id: c41650b9b6a0b59e31618d255a825f7647383760
Summary: Add references and docs for hmc and nuts inference. I didn't document get_proposers since I specify at the class level it returns a single block proposer.

Reviewed By: horizon-blue

Differential Revision: D32890217

fbshipit-source-id: e3619c1799eb65ad5984517f49775f6c515ed6ac
Summary:
Pull Request resolved: #1189

Revamps Uniform MH docs. Overall, these docs are short and sweet already, which is great. The main additions are:

1. Clarification that this method affects _proposal_ but not _acceptance_ probabilities (compared to Ancestral).
2. A paragraph justifying this potentially counter-intuitive method. (It may increase sampling efficiency.)

Reviewed By: wtaha

Differential Revision: D32909134

fbshipit-source-id: db68465f3413997c224706081d08fda0d8b43613
Summary:
Pull Request resolved: #1186

This diff redirects most of the algorithms from our legacy implementation to the new implementation, which is done by swapping the reference in `beanmachine/ppl/inference/__init__.py`.

The following algorithms are updated:
- `SingleSiteAncestralMetropolisHastings`
- `SingleSiteNewtonianMonteCarlo`
- `SingleSiteRandomWalk`
- `SingleSiteUniformMetropolisHastings`

The following algorithms will be updated by the next diff in the stack:
- `CompositionalInference`
- `SingleSiteNoUTurnSampler`
- `SingleSiteHamiltonianMonteCarlo`

We should pay attention to tutorials to see if anything strange happens after this update :)

Reviewed By: jpchen

Differential Revision: D32929527

fbshipit-source-id: 5ad842d07c1ce5b36c8d1c95541ad27a9f5fd5de
Summary:
Pull Request resolved: #1190

We were splitting explanations of the advantages of Bean Machine into those that apply to PPLs in general and those relative to other PPLs. Since the latter is of interest to a smaller portion of users, I moved it to "Advanced" but placed a tip on the first page pointing to it.
PS: file names and slugs remain the same, that needs to be re-organized in general.

Reviewed By: michaeltingley

Differential Revision: D32938541

fbshipit-source-id: f78960b499d69c3420fdaf5c49990fe4c501ed24
Reviewed By: zertosh

Differential Revision: D32948578

fbshipit-source-id: cef04737a7c57dc591c2f63cd479e239d9bfe008
Summary:
update Programmable Inference section with links to SSMH
rename Adaptation and Warmup to Adaptation

Reviewed By: wtaha

Differential Revision: D32926416

fbshipit-source-id: df06dd74a0552882557ebb24b30ebc5f1af51c38
Summary: Pull Request resolved: #1191

Reviewed By: jpchen

Differential Revision: D32905562

fbshipit-source-id: c4cad4a671c3d7c3be4506a7d2234c4b9e82905f
Summary:
Pull Request resolved: #1192

# Grab bag of minor edits to the tutorials page

Reorders tutorials so that:
  * The simplest tutorials appear first.
  * The highest quality tutorials appear next.
  * Remaining tutorials are ordered roughly in how they showcase Bean Machine functionality.

Fixes broken links.

Fleshes out tutorial descriptions.

Copy editing and consistency.

Reviewed By: wtaha, jpchen

Differential Revision: D32897420

fbshipit-source-id: 5e6fd0dbc56b5495c85a2056ad8e101365cb3c6b
Summary: Adds the zero-inflated count data tutorial to our listing. Note that the Notebook is already checked in, but there was no entry in the listing.

Reviewed By: wtaha, jpchen

Differential Revision: D32906207

fbshipit-source-id: 59fdc37d25af9489a4f35114493ae4143c7e645b
Summary: Overhauls NMC docs. I made things a bit cleaner to read, and added additional text on what a proposer is and how they're used in NMC. Also cleaned up the bits on adaptive NMC (it was unclear that adaptive NMC and learning rate were the same thing before).

Reviewed By: wtaha

Differential Revision: D32921087

fbshipit-source-id: a24c5d4a96f1a7606e2349457a2261574d182ffc
@review-notebook-app

Check out this pull request on  ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 8, 2021
@facebook-github-bot
Collaborator

@feynmanliang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Collaborator

This pull request was exported from Phabricator. Differential Revision: D32965258

1 similar comment

feynmanliang pushed a commit to feynmanliang/beanmachine that referenced this pull request Dec 15, 2021
Summary:
### Motivation
Tutorial improvements for OSS release

### Changes proposed
* Merges robust and ordinary linear regression tutorials
* Cleans up robust regression to run without errors

Pull Request resolved: facebookresearch#1195

Test Plan:
Manual review

### Types of changes
- [x] Docs change / refactoring / dependency upgrade
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

### Checklist
- [x] My code follows the code style of this project.
- [x] My change requires a change to the documentation.
- [x] I have updated the documentation accordingly.
- [x] I have read the **[CONTRIBUTING](https://github.com/facebookincubator/BeanMachine/blob/master/CONTRIBUTING.md)** document.
- [ ] I have added tests to cover my changes.
- [ ] All new and existing tests passed.
- [x] The title of my pull request is a short description of the requested changes.

**Static Docs Preview: beanmachine**
|[Full Site](https://our.intern.facebook.com/intern/staticdocs/eph/D32965258/V3/beanmachine/)|

|**Modified Pages**|
|[docs/tutorials](https://our.intern.facebook.com/intern/staticdocs/eph/D32965258/V3/beanmachine/docs/tutorials/)|

Reviewed By: wtaha, jpchen

Differential Revision: D32965258

Pulled By: feynmanliang

fbshipit-source-id: c51760ea42de05d2b0bb18a5d04b5e38af5426ba
facebook-github-bot pushed a commit that referenced this pull request Dec 20, 2021
Summary:
allow-large-files

### Motivation
Tutorial improvements for OSS release

### Changes proposed
* Merges robust and ordinary linear regression tutorials
* Cleans up robust regression to run without errors

Pull Request resolved: #1195

Pull Request resolved: #1276

Test Plan:
Manual review

### Types of changes
- [x] Docs change / refactoring / dependency upgrade
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

### Checklist
- [x] My code follows the code style of this project.
- [x] My change requires a change to the documentation.
- [x] I have updated the documentation accordingly.
- [x] I have read the **[CONTRIBUTING](https://github.com/facebookincubator/BeanMachine/blob/master/CONTRIBUTING.md)** document.
- [ ] I have added tests to cover my changes.
- [ ] All new and existing tests passed.
- [x] The title of my pull request is a short description of the requested changes.

**Static Docs Preview: beanmachine**
|[Full Site](https://our.intern.facebook.com/intern/staticdocs/eph/D33124449/V5/beanmachine/)|

|**Modified Pages**|
|[docs/tutorials](https://our.intern.facebook.com/intern/staticdocs/eph/D33124449/V5/beanmachine/docs/tutorials/)|

Reviewed By: neerajprad

Differential Revision: D33124449

Pulled By: feynmanliang

fbshipit-source-id: 2a02cb073d87e7959e3c05ffc199e3a6bab8921b