From d6cdc2f30d7725c120808e83cd19def65c6630a8 Mon Sep 17 00:00:00 2001
From: Jeremy Myers
Date: Wed, 27 Sep 2023 12:42:40 -0700
Subject: [PATCH] 230 tutorial gcp opt (#253)

* Draft tutorial
* Created ttensor/reconstruct tutorial in master.
* Moved to proper location.
* Suppress output
* Formatting.
* Draft of GCP_OPT tutorial. Requires resolving issue #252.
* Potential fix for ValueError in optimizers.
* autoformat
* Removed from main until completed in branch 248-tutorial-partial-reconstruction-of-a-tucker-tensor.
* Restored from upstream main.
* Auto formatting.
* Commenting out cells that use matplotlib.
* Draft tutorial
* Moved to proper location.
* Suppress output
* Formatting.
* Draft of GCP_OPT tutorial. Requires resolving issue #252.
* Auto formatting.
* Commenting out cells that use matplotlib.
* Sptensor mask (#259)
* SPTENSOR: Fix Mask, using tt_ismember_rows incorrectly
* PYTTB_UTILS: Design away error:
  * Make it more obvious member rows might not match
* LaTeX
* Decrease the number of maximum iterations for notebook examples to try and avoid time out issues.
* Closes #230

---------

Co-authored-by: Jeremy Myers
Co-authored-by: Danny Dunlavy
Co-authored-by: Nick <24689722+ntjohnson1@users.noreply.github.com>
---
 docs/source/tutorial/GCP_OPT.ipynb | 626 +++++++++++++++++++++++++++++
 1 file changed, 626 insertions(+)
 create mode 100644 docs/source/tutorial/GCP_OPT.ipynb

diff --git a/docs/source/tutorial/GCP_OPT.ipynb b/docs/source/tutorial/GCP_OPT.ipynb
new file mode 100644
index 0000000..875040e
--- /dev/null
+++ b/docs/source/tutorial/GCP_OPT.ipynb
@@ -0,0 +1,626 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Generalized CP (GCP) Tensor Decomposition\n",
+    "\n",
+    "```\n",
+    "Copyright 2022 National Technology & Engineering Solutions of Sandia,\n",
+    "LLC (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the\n",
+    "U.S. Government retains certain rights in this software.\n",
+    "```"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "tags": []
+   },
+   "source": [
+    "This document outlines usage and examples for the generalized CP (GCP) tensor decomposition implemented in `pyttb.gcp_opt`. GCP allows alternate objective functions besides the sum of squared errors, which is the standard for CP. The code supports both dense and sparse input tensors, but sparse input tensors require randomized optimization methods.\n",
+    "\n",
+    "GCP is described in greater detail in the manuscripts:\n",
+    "* D. Hong, T. G. Kolda, J. A. Duersch, Generalized Canonical Polyadic Tensor Decomposition, SIAM Review, 62:133-163, 2020, https://doi.org/10.1137/18M1203626\n",
+    "* T. G. Kolda, D. Hong, Stochastic Gradients for Large-Scale Tensor Decomposition, SIAM J. Mathematics of Data Science, 2:1066-1095, 2020, https://doi.org/10.1137/19m1266265"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Basic Usage\n",
+    "The idea of GCP is to use alternative objective functions. As such, the most important thing to specify is the objective function.\n",
+    "\n",
+    "The command\n",
+    "```\n",
+    "M = ttb.gcp_opt(data=X, rank=rank, objective=Objectives.<OBJECTIVE>, optimizer=<OPTIMIZER>)\n",
+    "```\n",
+    "computes an estimate of the best rank-$R$ generalized CP (GCP) decomposition of the tensor `X` for the generalized loss function specified by `<OBJECTIVE>`, solved with the optimizer `<OPTIMIZER>`. The input `X` can be a tensor or sparse tensor. The result `M` is a Kruskal tensor.\n",
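+    "\n",
+    "For example, assuming `X` is a dense `ttb.tensor`, a typical call (mirroring the examples later in this tutorial) looks like:\n",
+    "```\n",
+    "M, M0, info = ttb.gcp_opt(data=X, rank=2, objective=Objectives.GAUSSIAN, optimizer=LBFGSB(maxiter=100))\n",
+    "```\n",
+    "Here `M0` is the initial guess and `info` contains runtime information.\n",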
\n", + "\n", + "Predefined objective functions are:\n", + "\n", + "* `GAUSSIAN`: Gaussian distribution (see also `cp_als` and `cp_opt`)\n", + "* `BERNOULLI_ODDS`: Bernoulli distribution for binary data\n", + "* `BERNOULLI_LOGIT`: Bernoulli distribution for binary data with log link\n", + "* `POISSON`: Poisson distribution for count data (see also `cp_apr`)\n", + "* `POISSON_LOG`: Poisson distribution for count data with log link\n", + "* `RAYLEIGH`: Rayleigh distribution for nonnegative continuous data\n", + "* `GAMMA`: Gamma distribution for nonnegative continuous data\n", + "* `HUBER`: Similar to Gaussian but robust to outliers\n", + "* `NEGATIVE_BINOMIAL`: Models the number of trials required before we experience some number of failures. May be a useful alternative when Poisson is overdispersed.\n", + "* `BETA`: Generalizes exponential family of loss functions.\n", + "\n", + "Alternatively, a user can supply one's own objective function as a tuple of `function_handle`, `gradient_handle`, and `lower_bound`.\n", + "\n", + "Supported optimizers are:\n", + "* `LBFGSB`: bound-constrained limited-memory BFGS (L-BFGS-B). L-BFGS-B can only be used for dense tensors.\n", + "* `SGD`: Stochastic gradient descent (SGD). Can be used with both dense and sparse tensors.\n", + "* `Adagrad`: Adaptive gradients SGD method. Can be used with both dense and sparse tensors.\n", + "* `Adam`: Momentum-based SGD method. Can be used with both dense and sparse tensors.\n", + "\n", + "Each methods has parameters, which are described below." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Specifying Missing or Incomplete Data Using the Mask Option\n", + "If some entries of the tensor are unknown, the method can mask off that data during the fitting process. To do so, specify a *mask* tensor `W` that is the same size as the data tensor `X`. The mask tensor should be 1 if the entry in `X` is known and 0 otherwise. The call is \n", + "```\n", + "M = ttb.gcp_opt(data=X, rank=rank, objective=Objectives., optimizer=LBFGSB, mask=W)\n", + "```\n", + "Note: that `mask` isn't supported for stochastic solves." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Solver options\n", + "Defaults are listed in brackets {}\n", + "\n", + "Common options that can be passed to `pyttb.gcp_opt()` include:\n", + "\n", + "* `init`: Initial solution to the problem {\"random\"}.\n", + "* `printitn`: Controls verbosity of printing throughout the solve: print every n iterations; 0 for no printing.\n", + "* `sampler`: Class that defined sampling strategy for stochastic solves.\n", + "\n", + "## Other Options\n", + "In addition to the options above, the behavior of optimizers can be affected by constructing the optimizer with the following optional parameters.\n", + "\n", + "### Specifying L-BFGS-B Parameters\n", + "* `m`: {None}\n", + "* `factr`: Tolerance on the change on the objective value. Defaults to 1e7, which is multiplied by machine epsilon. {1e7}\n", + "* `pgtol`: Projected gradient tolerance, defaults to 1e-4 times total tensor size. It can sometimes be useful to increase or decrease `pgtol` depending on the objective function and size of the tensor. 
+    "\n",
+    "### Specifying SGD, Adagrad, and Adam Parameters\n",
+    "There are a number of parameters that can be adjusted for SGD, Adagrad, and Adam.\n",
+    "\n",
+    "#### Stochastic Gradient\n",
+    "There are three different sampling methods for computing the stochastic gradient:\n",
+    "\n",
+    "* Uniform - Entries are selected uniformly at random. Default for dense tensors.\n",
+    "* Stratified - Zeros and nonzeros are sampled separately, which is recommended for sparse tensors. Default for sparse tensors.\n",
+    "* Semi-Stratified - Modification to stratified sampling that avoids rejection sampling for better efficiency at the cost of potentially higher variance.\n",
+    "\n",
+    "The options corresponding to these are as follows.\n",
+    "\n",
+    "* `gradient_sampler`: Type of sampling to use for the stochastic gradient. Specified by setting `pyttb.gcp.samplers.<SAMPLER>`. Predefined options for `<SAMPLER>` are:\n",
+    "    * `Samplers.UNIFORM`: default for dense.\n",
+    "    * `Samplers.STRATIFIED`: default for sparse.\n",
+    "    * `Samplers.SEMISTRATIFIED`\n",
+    "* `gradient_samples`: The number of samples for the stochastic gradient can be specified as either an `int` or a `StratifiedCount` object. This should generally be $O(R\\sum_{k=1}^d n_k)$, where $n_k$ is the number of rows in the $k$-th mode and $R$ is the target rank. For the uniform sampler, only an `int` can be provided. For the stratified or semi-stratified sampler, this can be two numbers `a, b` provided as arguments to a `pyttb.gcp.samplers.StratifiedCount(a, b)` object. The first `a` is the number of nonzero samples and the second `b` is the number of zero samples. If only one number is specified, then it is used as the number for both nonzeros and zeros, and the total number of samples is 2x what is specified.\n",
+    "\n",
+    "#### Estimating the Function\n",
+    "\n",
+    "We also use sampling to estimate the function value.\n",
+    "\n",
+    "* `function_sampler`: This can be any of the three samplers specified above or a custom function handle. The custom function handle is primarily useful for reusing the same sampled elements across different tests.\n",
+    "* `function_samples`: Number of samples used to estimate the function. As before, this can be specified as either an `int` or a `StratifiedCount` object. It should generally be somewhat large, since we want this sample to generate a reliable estimate of the true function value.\n",
+    "\n",
+    "Creating the sampler takes two additional options:\n",
+    "* `max_iters`: Maximum number of iterations to normalize the number of samples. {1000}\n",
+    "* `over_sample_rate`: Ratio of extra samples to take to account for bad draws. {1.1}\n",
+    "\n",
+    "There are some other options that are needed for SGD: the learning rate and a decrease schedule. Our schedule is very simple - we decrease the rate if there is no improvement in the approximate function value after an epoch. After a specified number of decreases (`max_fails`), we quit. A sketch of a typical configuration follows the list below.\n",
+    "\n",
+    "* `rate`: Rate of descent, proportional to the step size. {1e-3}\n",
+    "* `decay`: How much to decrease the step size on failed epochs. {0.1}\n",
+    "* `max_fails`: How many failed epochs before terminating the solve. {1}\n",
+    "* `epoch_iters`: Number of steps to take per epoch. {1000}\n",
+    "* `f_est_tol`: Tolerance for function estimate changes to terminate the solve. {-inf}\n",
+    "* `max_iters`: Maximum number of epochs. {1000}\n",
+    "* `printitn`: Controls verbosity of information during the solve. {1}\n",
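+    "\n",
+    "For example, a sketch of configuring a stochastic solve (the keyword names mirror the list above and are assumptions about the exact constructor signature):\n",
+    "```\n",
+    "optimizer = SGD(rate=1e-3, decay=0.1, max_fails=2, epoch_iters=500, max_iters=100)\n",
+    "```\n",
+    "The optimizer is then passed to `ttb.gcp_opt` via `optimizer=`; a custom sampler can be supplied via the `sampler` argument.\n",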
+    "\n",
+    "There are some options that are specific to Adam and generally needn't change:\n",
+    "* `beta_1`: Adam-specific momentum parameter beta_1. {0.9}\n",
+    "* `beta_2`: Adam-specific momentum parameter beta_2. {0.999}\n",
+    "* `epsilon`: Adam-specific momentum parameter to avoid division by zero. {1e-8}"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import sys\n",
+    "import pyttb as ttb\n",
+    "import numpy as np\n",
+    "from pyttb.pyttb_utils import tt_tenfun\n",
+    "\n",
+    "from pyttb.gcp.fg_setup import function_type, setup\n",
+    "from pyttb.gcp.handles import Objectives\n",
+    "from pyttb.gcp.optimizers import LBFGSB, SGD, Adagrad, Adam\n",
+    "from pyttb.gcp.samplers import GCPSampler"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Example with Gaussian distribution"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "X = ttb.tenones((2, 2))\n",
+    "X[0, 1] = 0.0\n",
+    "X[1, 0] = 0.0\n",
+    "rank = 2"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Run GCP-OPT"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%time\n",
+    "# Select Gaussian objective\n",
+    "objective = Objectives.GAUSSIAN\n",
+    "\n",
+    "# Select LBFGSB solver with 2 max iterations\n",
+    "optimizer = LBFGSB(maxiter=2, iprint=1)\n",
+    "\n",
+    "# Compute rank-2 GCP approximation to X with GCP-OPT\n",
+    "# Return result, initial guess, and runtime information\n",
+    "np.random.seed(0)  # Creates consistent initial guess\n",
+    "result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(\n",
+    "    data=X, rank=rank, objective=objective, optimizer=optimizer\n",
+    ")\n",
+    "\n",
+    "print(\n",
+    "    f\"\\nFinal fit: {1 - np.linalg.norm((X-result_lbfgs.full()).double())/X.norm()} (for comparison to f(x) in CP-ALS)\\n\"\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Compare to CP-ALS, which should usually be faster"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%time\n",
+    "result_als, _, info_als = ttb.cp_als(\n",
+    "    input_tensor=X, rank=rank, maxiters=2, init=initial_guess\n",
+    ")\n",
+    "print(\n",
+    "    f\"\\nFinal fit: {1 - np.linalg.norm((X-result_als.full()).double())/X.norm()} (for comparison to f(x) in GCP-OPT)\\n\"\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Now let's try it with the Adam functionality"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%time\n",
+    "# Select Gaussian objective\n",
+    "objective = Objectives.GAUSSIAN\n",
+    "\n",
+    "# Select Adam solver with 2 max iterations\n",
+    "optimizer = Adam(max_iters=2)\n",
+    "\n",
+    "# Compute rank-2 GCP approximation to X with GCP-OPT\n",
+    "# Return result, initial guess, and runtime information\n",
+    "result_adam, _, info_adam = ttb.gcp_opt(\n",
+    "    data=X,\n",
+    "    rank=rank,\n",
+    "    objective=objective,\n",
+    "    optimizer=optimizer,\n",
+    "    init=initial_guess,\n",
+    "    printitn=1,\n",
+    ")\n",
+    "print(\n",
+    "    f\"\\nFinal fit: {1 - np.linalg.norm((X-result_adam.full()).double())/X.norm()} (for comparison to f(x) in GCP-OPT & CP-ALS)\\n\"\n",
")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Inspect runtime information from each run\n", + "print(f\"Runtime information from `gcp_opt_lbfgs`: \\n{info_lbfgs}\")\n", + "print(f\"\\nRuntime information from `cp_als`: \\n{info_als}\")\n", + "print(f\"\\nRuntime information from `gcp_opt_adam`: \\n{info_adam}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create an example Rayleigh tensor model and data instance.\n", + "Consider a tensor that is Rayleigh-distribued. This means its entries are all nonnegative. First, we generate such a tensor with low-rank structure." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "rng = np.random.default_rng(65)\n", + "rank = 3\n", + "shape = (50, 60, 70)\n", + "ndims = len(shape)\n", + "\n", + "# Create factor matrices that correspond to smooth sinusidal factors\n", + "U = []\n", + "for k in np.arange(ndims):\n", + " V = 1.1 + np.cos(\n", + " (2 * np.pi / shape[k] * np.arange(shape[k])[:, np.newaxis])\n", + " * np.arange(1, rank + 1)\n", + " )\n", + " U.append(V[:, rng.permutation(rank)])\n", + "\n", + "M_true = ttb.ktensor(U).normalize()\n", + "\n", + "\n", + "def make_rayleigh(X):\n", + " xvec = X.reshape((np.prod(X.shape), 1))\n", + " rayl = rng.rayleigh(size=xvec.shape)\n", + " yvec = rayl * xvec.data\n", + " Y = ttb.tensor(yvec, shape=X.shape)\n", + " return Y\n", + "\n", + "\n", + "X = make_rayleigh(M_true.full())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run GCP-OPT" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%time\n", + "# Select Rayleigh objective\n", + "objective = Objectives.RAYLEIGH\n", + "\n", + "# Select LBFGSB solver\n", + "optimizer = LBFGSB(maxiter=2, iprint=1)\n", + "\n", + "# Compute rank-3 GCP approximation to X with GCP-OPT\n", + "result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(\n", + " data=X, rank=rank, objective=objective, optimizer=optimizer, printitn=1\n", + ")\n", + "\n", + "print(f\"\\nFinal fit: {1 - np.linalg.norm((X-result_lbfgs.full()).double())/X.norm()}\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Now let's try is with the scarce functionality - this leaves out all but 10% of the data!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Select Rayleigh objective\n", + "objective = Objectives.RAYLEIGH\n", + "\n", + "# Select Adam solver\n", + "optimizer = Adam(max_iters=2)\n", + "\n", + "# Compute rank-3 GCP approximation to X with GCP-OPT\n", + "result_adam, initial_guess, info_adam = ttb.gcp_opt(\n", + " data=X, rank=rank, objective=objective, optimizer=optimizer, printitn=1\n", + ")\n", + "\n", + "print(f\"\\nFinal fit: {1 - np.linalg.norm((X-result_adam.full()).double())/X.norm()}\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Boolean tensor.\n", + "The model will predict the odds of observing a 1. Recall that the odds related to the probability as follows. If $p$ is the probability and $r$ is the odds, then $r = p / (1-p)$. Higher odds indicates a higher probability of observing a one." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "rng = np.random.default_rng(7639)\n",
+    "rank = 3\n",
+    "shape = (60, 70, 80)\n",
+    "ndims = len(shape)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We assume that the underlying model tensor has factor matrices with only a few \"large\" entries in each column. The small entries should correspond to a low but nonzero probability of observing a 1, while the largest entries, if multiplied together, should correspond to a very high likelihood of observing a 1."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "probrange = np.array([0.01, 0.99])\n",
+    "oddsrange = probrange / (1 - probrange)\n",
+    "smallval = np.power(np.min(oddsrange) / rank, (1 / ndims))\n",
+    "largeval = np.power(np.max(oddsrange) / rank, (1 / ndims))\n",
+    "\n",
+    "A = []\n",
+    "for k in np.arange(ndims):\n",
+    "    A.append(smallval * np.ones((shape[k], rank)))\n",
+    "    nbig = 5\n",
+    "    for j in np.arange(rank):\n",
+    "        p = rng.permutation(shape[k])\n",
+    "        # Set nbig randomly chosen entries in each column to the large value\n",
+    "        A[k][p[:nbig], j] = largeval\n",
+    "\n",
+    "M_true = ttb.ktensor(A)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Convert the ktensor to an observed tensor. Get the model values, which correspond to odds of observing a 1\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "Mfull = M_true.double()\n",
+    "# Convert odds to probabilities\n",
+    "Mprobs = Mfull / (1 + Mfull)\n",
+    "# Flip a coin for each entry, with the probability of observing a one\n",
+    "# dictated by Mprobs\n",
+    "Xfull = 1.0 * (ttb.tenrand(shape) < Mprobs)\n",
+    "# Convert to a sparse tensor: X is a real-valued 0/1 tensor that was\n",
+    "# constructed to be sparse\n",
+    "X = Xfull.to_sptensor()\n",
+    "print(f\"Proportion of nonzeros in X is {100*X.nnz / np.prod(shape):.2f}%\\n\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### TODO: Just for fun, let's visualize the distribution of the probabilities in the model tensor."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# import matplotlib.pyplot as plt\n",
+    "\n",
+    "# plt.hist(Mprobs.tolist()[0])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Call GCP_OPT on the full tensor"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%time\n",
+    "# Select Bernoulli-odds objective\n",
+    "objective = Objectives.BERNOULLI_ODDS\n",
+    "\n",
+    "# Select LBFGSB solver\n",
+    "optimizer = LBFGSB(iprint=1, maxiter=2)\n",
+    "\n",
+    "# Compute rank-3 GCP approximation to X with GCP-OPT\n",
+    "result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(\n",
+    "    data=Xfull, rank=rank, objective=objective, optimizer=optimizer\n",
+    ")\n",
+    "\n",
+    "print(f\"\\nFinal fit: {1 - np.linalg.norm((X-result_lbfgs.full()).double())/X.norm()}\\n\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Call GCP_OPT on the sparse tensor"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%time\n",
+    "# Select Bernoulli-odds objective\n",
+    "objective = Objectives.BERNOULLI_ODDS\n",
+    "\n",
+    "# Select Adam solver\n",
+    "optimizer = Adam(max_iters=2)\n",
+    "\n",
+    "# Compute rank-3 GCP approximation to X with GCP-OPT\n",
+    "result_adam, initial_guess, info_adam = ttb.gcp_opt(\n",
+    "    data=X, rank=rank, objective=objective, optimizer=optimizer, printitn=1\n",
+    ")\n",
+    "\n",
+    "print(f\"\\nFinal fit: {1 - np.linalg.norm((X-result_adam.full()).double())/X.norm()}\\n\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Create and test a Poisson count tensor.\n",
+    "\n",
+    "We follow the general procedure outlined by E. C. Chi and T. G. Kolda, On Tensors, Sparsity, and Nonnegative Factorizations, arXiv:1112.2414 [math.NA], December 2011 (http://arxiv.org/abs/1112.2414)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Pick the size and rank\n",
+    "sz = (10, 8, 6)\n",
+    "R = 5\n",
+    "\n",
+    "# Generate factor matrices with a few large entries in each column;\n",
+    "# this will be the basis of our solution.\n",
+    "np.random.seed(0)  # Set seed for reproducibility\n",
+    "A = []\n",
+    "for n in range(len(sz)):\n",
+    "    A.append(np.random.uniform(size=(sz[n], R)))\n",
+    "    for r in range(R):\n",
+    "        p = np.random.permutation(sz[n])\n",
+    "        nbig = round((1 / R) * sz[n])\n",
+    "        A[-1][p[0:nbig], r] *= 100\n",
+    "weights = np.random.uniform(size=(R,))\n",
+    "S = ttb.ktensor(A, weights)\n",
+    "S.normalize(sort=True, normtype=1)\n",
+    "\n",
+    "X = S.to_tensor()\n",
+    "X.data = np.floor(np.abs(X.data))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Loss function for Poisson negative log likelihood with identity link.\n",
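+    "\n",
+    "Up to an additive constant (the $\\log(x!)$ term), this loss is $f(x, m) = m - x \\log m$ with lower bound $m \\ge 0$, corresponding to the `POISSON` objective used below."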
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%time\n",
+    "# Select Poisson objective\n",
+    "objective = Objectives.POISSON\n",
+    "\n",
+    "# Select Adam solver\n",
+    "optimizer = Adam(max_iters=2)\n",
+    "\n",
+    "# Compute rank-5 GCP approximation to X with GCP-OPT\n",
+    "result_adam, initial_guess, info_adam = ttb.gcp_opt(\n",
+    "    data=X, rank=R, objective=objective, optimizer=optimizer, printitn=1\n",
+    ")\n",
+    "\n",
+    "print(f\"\\nFinal fit: {1 - np.linalg.norm((X-result_adam.full()).double())/X.norm()}\\n\")"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3.6.8 64-bit",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.9.6"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
+   }
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}