Demo example with weibull adstock fails with error: "'from' must be a finite number" #353

Closed
NikolayLutsyak opened this issue Mar 22, 2022 · 11 comments
@NikolayLutsyak

Project Robyn

Describe issue

Hi! Thank you for a great library!
I ran into a problem while using Robyn that I can't solve on my own.
I am trying to run the demo example with adstock = "weibull_cdf" and get the following error:

Error in {: task 1 failed - "'from' must be a finite number"

With adstock = "geometric" everything works fine and I was able to fit the model and obtain results, but with weibull_cdf I get this strange error. Setting cores = 1 did not help. Full stdout/stderr:

'window_start' is adapted to the closest date contained in input data: 2016-11-21

'window_end' is adapted to the closest date contained in input data: 2018-08-20

'hyperparameters' are not provided yet. To include them, run robyn_inputs(InputCollect = InputCollect, hyperparameters = ...)

Total Observations: 208 (weeks)
Input Table Columns (12):
  Date: DATE
  Dependent: revenue [revenue]
  Paid Media: tv_S, ooh_S, print_S, facebook_I, search_clicks_P
  Paid Media Spend: tv_S, ooh_S, print_S, facebook_S, search_S
  Context: competitor_sales_B, events
  Organic: newsletter
  Prophet (Auto-generated): trend, season, holiday on DE
  Unused: 

Date Range: 2015-11-23:2019-11-11
Model Window: 2016-11-21:2018-08-20 (92 weeks)
With Calibration: FALSE
Custom parameters: None

Adstock: weibull_cdf
Hyper-parameters for media transformations:
  facebook_S_alphas: [0.5, 3]
  facebook_S_gammas: [0.3, 1]
  facebook_S_shapes: [1e-04, 2]
  facebook_S_scales: [0, 0.1]
  print_S_alphas: [0.5, 3]
  print_S_gammas: [0.3, 1]
  print_S_shapes: [1e-04, 2]
  print_S_scales: [0, 0.1]
  tv_S_alphas: [0.5, 3]
  tv_S_gammas: [0.3, 1]
  tv_S_shapes: [1e-04, 2]
  tv_S_scales: [0, 0.1]
  search_S_alphas: [0.5, 3]
  search_S_gammas: [0.3, 1]
  search_S_shapes: [1e-04, 2]
  search_S_scales: [0, 0.1]
  ooh_S_alphas: [0.5, 3]
  ooh_S_gammas: [0.3, 1]
  ooh_S_shapes: [1e-04, 2]
  ooh_S_scales: [0, 0.1]
  newsletter_alphas: [0.5, 3]
  newsletter_gammas: [0.3, 1]
  newsletter_shapes: [1e-04, 2]
  newsletter_scales: [0, 0.1]
Input data has 208 weeks in total: 2015-11-23 to 2019-11-11

Initial model is built on rolling window of 92 week: 2016-11-21 to 2018-08-20

Using weibull_cdf adstocking with 25 hyperparameters (25 to iterate + 0 fixed) on 1 cores

>>> Starting 5 trials with 2000 iterations each using TwoPointsDE nevergrad algorithm...

  Running trial 1 of 5

  |                                                                      |   0%
Warning message in mapply(function(x_val, x_pos) {:
“longer argument not a multiple of length of shorter”
Error in {: task 1 failed - "'from' must be a finite number"
Traceback:

1. robyn_run(InputCollect = InputCollect, cores = 1, iterations = 2000, 
 .     trials = 5, outputs = FALSE)
2. robyn_train(InputCollect, hyper_collect = hyps, cores, iterations, 
 .     trials, intercept_sign, nevergrad_algo, dt_hyper_fixed, add_penalty_factor, 
 .     refresh, seed, quiet)
3. robyn_mmm(InputCollect = InputCollect, hyper_collect = hyper_collect, 
 .     iterations = iterations, cores = cores, nevergrad_algo = nevergrad_algo, 
 .     intercept_sign = intercept_sign, add_penalty_factor = add_penalty_factor, 
 .     refresh = refresh, seed = seed + ngt, quiet = quiet)
4. system.time({
 .     for (lng in 1:iterNG) {
 .         nevergrad_hp <- list()
 .         nevergrad_hp_val <- list()
 .         hypParamSamList <- list()
 .         hypParamSamNG <- c()
 .         if (hyper_fixed == FALSE) {
 .             for (co in 1:iterPar) {
 .                 nevergrad_hp[[co]] <- optimizer$ask()
 .                 nevergrad_hp_val[[co]] <- nevergrad_hp[[co]]$value
 .                 for (hypNameLoop in hyper_bound_list_updated_name) {
 .                   index <- which(hypNameLoop == hyper_bound_list_updated_name)
 .                   channelBound <- unlist(hyper_bound_list_updated[hypNameLoop])
 .                   hyppar_value <- nevergrad_hp_val[[co]][index]
 .                   if (length(channelBound) > 1) {
 .                     hypParamSamNG[hypNameLoop] <- qunif(hyppar_value, 
 .                       min(channelBound), max(channelBound))
 .                   }
 .                   else {
 .                     hypParamSamNG[hypNameLoop] <- hyppar_value
 .                   }
 .                 }
 .                 hypParamSamList[[co]] <- transpose(data.table(hypParamSamNG))
 .             }
 .             hypParamSamNG <- rbindlist(hypParamSamList)
 .             hypParamSamNG <- setnames(hypParamSamNG, names(hypParamSamNG), 
 .                 hyper_bound_list_updated_name)
 .             if (hyper_count_fixed != 0) {
 .                 hypParamSamNG <- cbind(hypParamSamNG, dt_hyper_fixed_mod)
 .                 hypParamSamNG <- setcolorder(hypParamSamNG, hypParamSamName)
 .             }
 .         }
 .         else {
 .             hypParamSamNG <- setcolorder(dt_hyper_fixed_mod, 
 .                 hypParamSamName)
 .         }
 .         nrmse.collect <- c()
 .         decomp.rssd.collect <- c()
 .         best_mape <- Inf
 .         doparCollect <- suppressPackageStartupMessages(foreach(i = 1:iterPar) %dorng% 
 .             {
 .                 t1 <- Sys.time()
 .                 hypParamSam <- unlist(hypParamSamNG[i])
 .                 dt_modAdstocked <- dt_mod[, .SD, .SDcols = setdiff(names(dt_mod), 
 .                   "ds")]
 .                 mediaAdstocked <- list()
 .                 mediaVecCum <- list()
 .                 mediaSaturated <- list()
 .                 for (v in 1:length(all_media)) {
 .                   adstock <- check_adstock(adstock)
 .                   m <- dt_modAdstocked[, get(all_media[v])]
 .                   if (adstock == "geometric") {
 .                     theta <- hypParamSam[paste0(all_media[v], 
 .                       "_thetas")]
 .                     x_list <- adstock_geometric(x = m, theta = theta)
 .                   }
 .                   else if (adstock == "weibull_cdf") {
 .                     shape <- hypParamSam[paste0(all_media[v], 
 .                       "_shapes")]
 .                     scale <- hypParamSam[paste0(all_media[v], 
 .                       "_scales")]
 .                     x_list <- adstock_weibull(x = m, shape = shape, 
 .                       scale = scale, windlen = rollingWindowLength, 
 .                       type = "cdf")
 .                   }
 .                   else if (adstock == "weibull_pdf") {
 .                     shape <- hypParamSam[paste0(all_media[v], 
 .                       "_shapes")]
 .                     scale <- hypParamSam[paste0(all_media[v], 
 .                       "_scales")]
 .                     x_list <- adstock_weibull(x = m, shape = shape, 
 .                       scale = scale, windlen = rollingWindowLength, 
 .                       type = "pdf")
 .                   }
 .                   m_adstocked <- x_list$x_decayed
 .                   mediaAdstocked[[v]] <- m_adstocked
 .                   mediaVecCum[[v]] <- x_list$thetaVecCum
 .                   m_adstockedRollWind <- m_adstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .                   alpha <- hypParamSam[paste0(all_media[v], "_alphas")]
 .                   gamma <- hypParamSam[paste0(all_media[v], "_gammas")]
 .                   mediaSaturated[[v]] <- saturation_hill(m_adstockedRollWind, 
 .                     alpha = alpha, gamma = gamma)
 .                 }
 .                 names(mediaAdstocked) <- all_media
 .                 dt_modAdstocked[, `:=`((all_media), mediaAdstocked)]
 .                 dt_mediaVecCum <- data.table()[, `:=`((all_media), 
 .                   mediaVecCum)]
 .                 names(mediaSaturated) <- all_media
 .                 dt_modSaturated <- dt_modAdstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .                 dt_modSaturated[, `:=`((all_media), mediaSaturated)]
 .                 dt_train <- copy(dt_modSaturated)
 .                 y_train <- dt_train$dep_var
 .                 x_train <- model.matrix(dep_var ~ ., dt_train)[, 
 .                   -1]
 .                 dt_sign <- dt_modSaturated[, !"dep_var"]
 .                 x_sign <- c(prophet_signs, context_signs, paid_media_signs, 
 .                   organic_signs)
 .                 names(x_sign) <- c(prophet_vars, context_vars, 
 .                   paid_media_spends, organic_vars)
 .                 check_factor <- sapply(dt_sign, is.factor)
 .                 lower.limits <- upper.limits <- c()
 .                 for (s in 1:length(check_factor)) {
 .                   if (check_factor[s] == TRUE) {
 .                     level.n <- length(levels(unlist(dt_sign[, 
 .                       s, with = FALSE])))
 .                     if (level.n <= 1) {
 .                       stop("factor variables must have more than 1 level")
 .                     }
 .                     lower_vec <- if (x_sign[s] == "positive") {
 .                       rep(0, level.n - 1)
 .                     }
 .                     else {
 .                       rep(-Inf, level.n - 1)
 .                     }
 .                     upper_vec <- if (x_sign[s] == "negative") {
 .                       rep(0, level.n - 1)
 .                     }
 .                     else {
 .                       rep(Inf, level.n - 1)
 .                     }
 .                     lower.limits <- c(lower.limits, lower_vec)
 .                     upper.limits <- c(upper.limits, upper_vec)
 .                   }
 .                   else {
 .                     lower.limits <- c(lower.limits, ifelse(x_sign[s] == 
 .                       "positive", 0, -Inf))
 .                     upper.limits <- c(upper.limits, ifelse(x_sign[s] == 
 .                       "negative", 0, Inf))
 .                   }
 .                 }
 .                 lambda_hp <- unlist(hypParamSamNG$lambda[i])
 .                 if (hyper_fixed == FALSE) {
 .                   lambda_scaled <- lambda_min + (lambda_max - 
 .                     lambda_min) * lambda_hp
 .                 }
 .                 else {
 .                   lambda_scaled <- lambda_hp
 .                 }
 .                 if (add_penalty_factor) {
 .                   penalty.factor <- unlist(as.data.frame(hypParamSamNG)[i, 
 .                     grepl("penalty_", names(hypParamSamNG))])
 .                 }
 .                 else {
 .                   penalty.factor <- rep(1, ncol(x_train))
 .                 }
 .                 glm_mod <- glmnet(x_train, y_train, family = "gaussian", 
 .                   alpha = 0, lambda = lambda_scaled, lower.limits = lower.limits, 
 .                   upper.limits = upper.limits, type.measure = "mse", 
 .                   penalty.factor = penalty.factor)
 .                 mod_out <- model_refit(x_train, y_train, lambda = lambda_scaled, 
 .                   lower.limits, upper.limits, intercept_sign)
 .                 decompCollect <- model_decomp(coefs = mod_out$coefs, 
 .                   dt_modSaturated = dt_modSaturated, x = x_train, 
 .                   y_pred = mod_out$y_pred, i = i, dt_modRollWind = dt_modRollWind, 
 .                   refreshAddedStart = refreshAddedStart)
 .                 nrmse <- mod_out$nrmse_train
 .                 mape <- 0
 .                 df.int <- mod_out$df.int
 .                 if (!is.null(calibration_input)) {
 .                   liftCollect <- calibrate_mmm(decompCollect = decompCollect, 
 .                     calibration_input = calibration_input, paid_media_spends = paid_media_spends, 
 .                     dayInterval = InputCollect$dayInterval)
 .                   mape <- liftCollect[, mean(mape_lift)]
 .                 }
 .                 dt_decompSpendDist <- decompCollect$xDecompAgg[rn %in% 
 .                   paid_media_spends, .(rn, xDecompAgg, xDecompPerc, 
 .                   xDecompMeanNon0Perc, xDecompMeanNon0, xDecompPercRF, 
 .                   xDecompMeanNon0PercRF, xDecompMeanNon0RF)]
 .                 dt_decompSpendDist <- dt_decompSpendDist[dt_spendShare[, 
 .                   .(rn, spend_share, spend_share_refresh, mean_spend, 
 .                     total_spend)], on = "rn"]
 .                 dt_decompSpendDist[, `:=`(effect_share = xDecompPerc/sum(xDecompPerc), 
 .                   effect_share_refresh = xDecompPercRF/sum(xDecompPercRF))]
 .                 decompCollect$xDecompAgg[dt_decompSpendDist[, 
 .                   .(rn, spend_share_refresh, effect_share_refresh)], 
 .                   `:=`(spend_share_refresh = i.spend_share_refresh, 
 .                     effect_share_refresh = i.effect_share_refresh), 
 .                   on = "rn"]
 .                 if (!refresh) {
 .                   decomp.rssd <- dt_decompSpendDist[, sqrt(sum((effect_share - 
 .                     spend_share)^2))]
 .                 }
 .                 else {
 .                   dt_decompRF <- decompCollect$xDecompAgg[, .(rn, 
 .                     decomp_perc = xDecompPerc)][xDecompAggPrev[, 
 .                     .(rn, decomp_perc_prev = xDecompPerc)], on = "rn"]
 .                   decomp.rssd.nonmedia <- dt_decompRF[!(rn %in% 
 .                     paid_media_spends), sqrt(mean((decomp_perc - 
 .                     decomp_perc_prev)^2))]
 .                   decomp.rssd.media <- dt_decompSpendDist[, sqrt(mean((effect_share_refresh - 
 .                     spend_share_refresh)^2))]
 .                   decomp.rssd <- decomp.rssd.media + decomp.rssd.nonmedia/(1 - 
 .                     refresh_steps/rollingWindowLength)
 .                 }
 .                 if (is.nan(decomp.rssd)) {
 .                   decomp.rssd <- Inf
 .                   dt_decompSpendDist[, `:=`(effect_share, 0)]
 .                 }
 .                 resultHypParam <- data.table()[, `:=`((hypParamSamName), 
 .                   lapply(hypParamSam[1:length(hypParamSamName)], 
 .                     function(x) x))]
 .                 resultCollect <- list(resultHypParam = resultHypParam[, 
 .                   `:=`(mape = mape, nrmse = nrmse, decomp.rssd = decomp.rssd, 
 .                     rsq_train = mod_out$rsq_train, lambda = lambda_scaled, 
 .                     lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .                     pos = prod(decompCollect$xDecompAgg$pos), 
 .                     Elapsed = as.numeric(difftime(Sys.time(), 
 .                       t1, units = "secs")), ElapsedAccum = as.numeric(difftime(Sys.time(), 
 .                       t0, units = "secs")), iterPar = i, iterNG = lng, 
 .                     df.int = df.int)], xDecompVec = if (hyper_fixed) {
 .                   decompCollect$xDecompVec[, `:=`(intercept = decompCollect$xDecompAgg[rn == 
 .                     "(Intercept)", xDecompAgg], mape = mape, 
 .                     nrmse = nrmse, decomp.rssd = decomp.rssd, 
 .                     rsq_train = mod_out$rsq_train, lambda = lambda_scaled, 
 .                     lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .                     iterPar = i, iterNG = lng, df.int = df.int)]
 .                 } else {
 .                   NULL
 .                 }, xDecompAgg = decompCollect$xDecompAgg[, `:=`(mape = mape, 
 .                   nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                   lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                   lambda_max = lambda_max, iterPar = i, iterNG = lng, 
 .                   df.int = df.int)], liftCalibration = if (!is.null(calibration_input)) {
 .                   liftCollect[, `:=`(mape = mape, nrmse = nrmse, 
 .                     decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                     lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                     lambda_max = lambda_max, iterPar = i, iterNG = lng)]
 .                 } else {
 .                   NULL
 .                 }, decompSpendDist = dt_decompSpendDist[, `:=`(mape = mape, 
 .                   nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                   lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                   lambda_max = lambda_max, iterPar = i, iterNG = lng, 
 .                   df.int = df.int)], mape.lift = mape, nrmse = nrmse, 
 .                   decomp.rssd = decomp.rssd, lambda = lambda_scaled, 
 .                   lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .                   lambda_min_ratio = lambda_min_ratio, iterPar = i, 
 .                   iterNG = lng, df.int = df.int)
 .                 best_mape <- min(best_mape, mape)
 .                 if (cnt == iterTotal) {
 .                   print(" === ")
 .                   print(paste0("Optimizer_name: ", optimizer_name, 
 .                     ";  Total_iterations: ", cnt, ";   best_mape: ", 
 .                     best_mape))
 .                 }
 .                 return(resultCollect)
 .             })
 .         nrmse.collect <- sapply(doparCollect, function(x) x$nrmse)
 .         decomp.rssd.collect <- sapply(doparCollect, function(x) x$decomp.rssd)
 .         mape.lift.collect <- sapply(doparCollect, function(x) x$mape.lift)
 .         if (!hyper_fixed) {
 .             if (is.null(calibration_input)) {
 .                 for (co in 1:iterPar) {
 .                   optimizer$tell(nevergrad_hp[[co]], tuple(nrmse.collect[co], 
 .                     decomp.rssd.collect[co]))
 .                 }
 .             }
 .             else {
 .                 for (co in 1:iterPar) {
 .                   optimizer$tell(nevergrad_hp[[co]], tuple(nrmse.collect[co], 
 .                     decomp.rssd.collect[co], mape.lift.collect[co]))
 .                 }
 .             }
 .         }
 .         resultCollectNG[[lng]] <- doparCollect
 .         if (!quiet) {
 .             cnt <- cnt + iterPar
 .             if (!hyper_fixed) 
 .                 setTxtProgressBar(pb, cnt)
 .         }
 .     }
 . })
5. suppressPackageStartupMessages(foreach(i = 1:iterPar) %dorng% 
 .     {
 .         t1 <- Sys.time()
 .         hypParamSam <- unlist(hypParamSamNG[i])
 .         dt_modAdstocked <- dt_mod[, .SD, .SDcols = setdiff(names(dt_mod), 
 .             "ds")]
 .         mediaAdstocked <- list()
 .         mediaVecCum <- list()
 .         mediaSaturated <- list()
 .         for (v in 1:length(all_media)) {
 .             adstock <- check_adstock(adstock)
 .             m <- dt_modAdstocked[, get(all_media[v])]
 .             if (adstock == "geometric") {
 .                 theta <- hypParamSam[paste0(all_media[v], "_thetas")]
 .                 x_list <- adstock_geometric(x = m, theta = theta)
 .             }
 .             else if (adstock == "weibull_cdf") {
 .                 shape <- hypParamSam[paste0(all_media[v], "_shapes")]
 .                 scale <- hypParamSam[paste0(all_media[v], "_scales")]
 .                 x_list <- adstock_weibull(x = m, shape = shape, 
 .                   scale = scale, windlen = rollingWindowLength, 
 .                   type = "cdf")
 .             }
 .             else if (adstock == "weibull_pdf") {
 .                 shape <- hypParamSam[paste0(all_media[v], "_shapes")]
 .                 scale <- hypParamSam[paste0(all_media[v], "_scales")]
 .                 x_list <- adstock_weibull(x = m, shape = shape, 
 .                   scale = scale, windlen = rollingWindowLength, 
 .                   type = "pdf")
 .             }
 .             m_adstocked <- x_list$x_decayed
 .             mediaAdstocked[[v]] <- m_adstocked
 .             mediaVecCum[[v]] <- x_list$thetaVecCum
 .             m_adstockedRollWind <- m_adstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .             alpha <- hypParamSam[paste0(all_media[v], "_alphas")]
 .             gamma <- hypParamSam[paste0(all_media[v], "_gammas")]
 .             mediaSaturated[[v]] <- saturation_hill(m_adstockedRollWind, 
 .                 alpha = alpha, gamma = gamma)
 .         }
 .         names(mediaAdstocked) <- all_media
 .         dt_modAdstocked[, `:=`((all_media), mediaAdstocked)]
 .         dt_mediaVecCum <- data.table()[, `:=`((all_media), mediaVecCum)]
 .         names(mediaSaturated) <- all_media
 .         dt_modSaturated <- dt_modAdstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .         dt_modSaturated[, `:=`((all_media), mediaSaturated)]
 .         dt_train <- copy(dt_modSaturated)
 .         y_train <- dt_train$dep_var
 .         x_train <- model.matrix(dep_var ~ ., dt_train)[, -1]
 .         dt_sign <- dt_modSaturated[, !"dep_var"]
 .         x_sign <- c(prophet_signs, context_signs, paid_media_signs, 
 .             organic_signs)
 .         names(x_sign) <- c(prophet_vars, context_vars, paid_media_spends, 
 .             organic_vars)
 .         check_factor <- sapply(dt_sign, is.factor)
 .         lower.limits <- upper.limits <- c()
 .         for (s in 1:length(check_factor)) {
 .             if (check_factor[s] == TRUE) {
 .                 level.n <- length(levels(unlist(dt_sign[, s, 
 .                   with = FALSE])))
 .                 if (level.n <= 1) {
 .                   stop("factor variables must have more than 1 level")
 .                 }
 .                 lower_vec <- if (x_sign[s] == "positive") {
 .                   rep(0, level.n - 1)
 .                 }
 .                 else {
 .                   rep(-Inf, level.n - 1)
 .                 }
 .                 upper_vec <- if (x_sign[s] == "negative") {
 .                   rep(0, level.n - 1)
 .                 }
 .                 else {
 .                   rep(Inf, level.n - 1)
 .                 }
 .                 lower.limits <- c(lower.limits, lower_vec)
 .                 upper.limits <- c(upper.limits, upper_vec)
 .             }
 .             else {
 .                 lower.limits <- c(lower.limits, ifelse(x_sign[s] == 
 .                   "positive", 0, -Inf))
 .                 upper.limits <- c(upper.limits, ifelse(x_sign[s] == 
 .                   "negative", 0, Inf))
 .             }
 .         }
 .         lambda_hp <- unlist(hypParamSamNG$lambda[i])
 .         if (hyper_fixed == FALSE) {
 .             lambda_scaled <- lambda_min + (lambda_max - lambda_min) * 
 .                 lambda_hp
 .         }
 .         else {
 .             lambda_scaled <- lambda_hp
 .         }
 .         if (add_penalty_factor) {
 .             penalty.factor <- unlist(as.data.frame(hypParamSamNG)[i, 
 .                 grepl("penalty_", names(hypParamSamNG))])
 .         }
 .         else {
 .             penalty.factor <- rep(1, ncol(x_train))
 .         }
 .         glm_mod <- glmnet(x_train, y_train, family = "gaussian", 
 .             alpha = 0, lambda = lambda_scaled, lower.limits = lower.limits, 
 .             upper.limits = upper.limits, type.measure = "mse", 
 .             penalty.factor = penalty.factor)
 .         mod_out <- model_refit(x_train, y_train, lambda = lambda_scaled, 
 .             lower.limits, upper.limits, intercept_sign)
 .         decompCollect <- model_decomp(coefs = mod_out$coefs, 
 .             dt_modSaturated = dt_modSaturated, x = x_train, y_pred = mod_out$y_pred, 
 .             i = i, dt_modRollWind = dt_modRollWind, refreshAddedStart = refreshAddedStart)
 .         nrmse <- mod_out$nrmse_train
 .         mape <- 0
 .         df.int <- mod_out$df.int
 .         if (!is.null(calibration_input)) {
 .             liftCollect <- calibrate_mmm(decompCollect = decompCollect, 
 .                 calibration_input = calibration_input, paid_media_spends = paid_media_spends, 
 .                 dayInterval = InputCollect$dayInterval)
 .             mape <- liftCollect[, mean(mape_lift)]
 .         }
 .         dt_decompSpendDist <- decompCollect$xDecompAgg[rn %in% 
 .             paid_media_spends, .(rn, xDecompAgg, xDecompPerc, 
 .             xDecompMeanNon0Perc, xDecompMeanNon0, xDecompPercRF, 
 .             xDecompMeanNon0PercRF, xDecompMeanNon0RF)]
 .         dt_decompSpendDist <- dt_decompSpendDist[dt_spendShare[, 
 .             .(rn, spend_share, spend_share_refresh, mean_spend, 
 .                 total_spend)], on = "rn"]
 .         dt_decompSpendDist[, `:=`(effect_share = xDecompPerc/sum(xDecompPerc), 
 .             effect_share_refresh = xDecompPercRF/sum(xDecompPercRF))]
 .         decompCollect$xDecompAgg[dt_decompSpendDist[, .(rn, spend_share_refresh, 
 .             effect_share_refresh)], `:=`(spend_share_refresh = i.spend_share_refresh, 
 .             effect_share_refresh = i.effect_share_refresh), on = "rn"]
 .         if (!refresh) {
 .             decomp.rssd <- dt_decompSpendDist[, sqrt(sum((effect_share - 
 .                 spend_share)^2))]
 .         }
 .         else {
 .             dt_decompRF <- decompCollect$xDecompAgg[, .(rn, decomp_perc = xDecompPerc)][xDecompAggPrev[, 
 .                 .(rn, decomp_perc_prev = xDecompPerc)], on = "rn"]
 .             decomp.rssd.nonmedia <- dt_decompRF[!(rn %in% paid_media_spends), 
 .                 sqrt(mean((decomp_perc - decomp_perc_prev)^2))]
 .             decomp.rssd.media <- dt_decompSpendDist[, sqrt(mean((effect_share_refresh - 
 .                 spend_share_refresh)^2))]
 .             decomp.rssd <- decomp.rssd.media + decomp.rssd.nonmedia/(1 - 
 .                 refresh_steps/rollingWindowLength)
 .         }
 .         if (is.nan(decomp.rssd)) {
 .             decomp.rssd <- Inf
 .             dt_decompSpendDist[, `:=`(effect_share, 0)]
 .         }
 .         resultHypParam <- data.table()[, `:=`((hypParamSamName), 
 .             lapply(hypParamSam[1:length(hypParamSamName)], function(x) x))]
 .         resultCollect <- list(resultHypParam = resultHypParam[, 
 .             `:=`(mape = mape, nrmse = nrmse, decomp.rssd = decomp.rssd, 
 .                 rsq_train = mod_out$rsq_train, lambda = lambda_scaled, 
 .                 lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .                 pos = prod(decompCollect$xDecompAgg$pos), Elapsed = as.numeric(difftime(Sys.time(), 
 .                   t1, units = "secs")), ElapsedAccum = as.numeric(difftime(Sys.time(), 
 .                   t0, units = "secs")), iterPar = i, iterNG = lng, 
 .                 df.int = df.int)], xDecompVec = if (hyper_fixed) {
 .             decompCollect$xDecompVec[, `:=`(intercept = decompCollect$xDecompAgg[rn == 
 .                 "(Intercept)", xDecompAgg], mape = mape, nrmse = nrmse, 
 .                 decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                 lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                 lambda_max = lambda_max, iterPar = i, iterNG = lng, 
 .                 df.int = df.int)]
 .         } else {
 .             NULL
 .         }, xDecompAgg = decompCollect$xDecompAgg[, `:=`(mape = mape, 
 .             nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .             lambda = lambda_scaled, lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .             iterPar = i, iterNG = lng, df.int = df.int)], liftCalibration = if (!is.null(calibration_input)) {
 .             liftCollect[, `:=`(mape = mape, nrmse = nrmse, decomp.rssd = decomp.rssd, 
 .                 rsq_train = mod_out$rsq_train, lambda = lambda_scaled, 
 .                 lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .                 iterPar = i, iterNG = lng)]
 .         } else {
 .             NULL
 .         }, decompSpendDist = dt_decompSpendDist[, `:=`(mape = mape, 
 .             nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .             lambda = lambda_scaled, lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .             iterPar = i, iterNG = lng, df.int = df.int)], mape.lift = mape, 
 .             nrmse = nrmse, decomp.rssd = decomp.rssd, lambda = lambda_scaled, 
 .             lambda_hp = lambda_hp, lambda_max = lambda_max, lambda_min_ratio = lambda_min_ratio, 
 .             iterPar = i, iterNG = lng, df.int = df.int)
 .         best_mape <- min(best_mape, mape)
 .         if (cnt == iterTotal) {
 .             print(" === ")
 .             print(paste0("Optimizer_name: ", optimizer_name, 
 .                 ";  Total_iterations: ", cnt, ";   best_mape: ", 
 .                 best_mape))
 .         }
 .         return(resultCollect)
 .     })
6. withCallingHandlers(expr, packageStartupMessage = function(c) tryInvokeRestart("muffleMessage"))
7. foreach(i = 1:iterPar) %dorng% {
 .     t1 <- Sys.time()
 .     hypParamSam <- unlist(hypParamSamNG[i])
 .     dt_modAdstocked <- dt_mod[, .SD, .SDcols = setdiff(names(dt_mod), 
 .         "ds")]
 .     mediaAdstocked <- list()
 .     mediaVecCum <- list()
 .     mediaSaturated <- list()
 .     for (v in 1:length(all_media)) {
 .         adstock <- check_adstock(adstock)
 .         m <- dt_modAdstocked[, get(all_media[v])]
 .         if (adstock == "geometric") {
 .             theta <- hypParamSam[paste0(all_media[v], "_thetas")]
 .             x_list <- adstock_geometric(x = m, theta = theta)
 .         }
 .         else if (adstock == "weibull_cdf") {
 .             shape <- hypParamSam[paste0(all_media[v], "_shapes")]
 .             scale <- hypParamSam[paste0(all_media[v], "_scales")]
 .             x_list <- adstock_weibull(x = m, shape = shape, scale = scale, 
 .                 windlen = rollingWindowLength, type = "cdf")
 .         }
 .         else if (adstock == "weibull_pdf") {
 .             shape <- hypParamSam[paste0(all_media[v], "_shapes")]
 .             scale <- hypParamSam[paste0(all_media[v], "_scales")]
 .             x_list <- adstock_weibull(x = m, shape = shape, scale = scale, 
 .                 windlen = rollingWindowLength, type = "pdf")
 .         }
 .         m_adstocked <- x_list$x_decayed
 .         mediaAdstocked[[v]] <- m_adstocked
 .         mediaVecCum[[v]] <- x_list$thetaVecCum
 .         m_adstockedRollWind <- m_adstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .         alpha <- hypParamSam[paste0(all_media[v], "_alphas")]
 .         gamma <- hypParamSam[paste0(all_media[v], "_gammas")]
 .         mediaSaturated[[v]] <- saturation_hill(m_adstockedRollWind, 
 .             alpha = alpha, gamma = gamma)
 .     }
 .     names(mediaAdstocked) <- all_media
 .     dt_modAdstocked[, `:=`((all_media), mediaAdstocked)]
 .     dt_mediaVecCum <- data.table()[, `:=`((all_media), mediaVecCum)]
 .     names(mediaSaturated) <- all_media
 .     dt_modSaturated <- dt_modAdstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .     dt_modSaturated[, `:=`((all_media), mediaSaturated)]
 .     dt_train <- copy(dt_modSaturated)
 .     y_train <- dt_train$dep_var
 .     x_train <- model.matrix(dep_var ~ ., dt_train)[, -1]
 .     dt_sign <- dt_modSaturated[, !"dep_var"]
 .     x_sign <- c(prophet_signs, context_signs, paid_media_signs, 
 .         organic_signs)
 .     names(x_sign) <- c(prophet_vars, context_vars, paid_media_spends, 
 .         organic_vars)
 .     check_factor <- sapply(dt_sign, is.factor)
 .     lower.limits <- upper.limits <- c()
 .     for (s in 1:length(check_factor)) {
 .         if (check_factor[s] == TRUE) {
 .             level.n <- length(levels(unlist(dt_sign[, s, with = FALSE])))
 .             if (level.n <= 1) {
 .                 stop("factor variables must have more than 1 level")
 .             }
 .             lower_vec <- if (x_sign[s] == "positive") {
 .                 rep(0, level.n - 1)
 .             }
 .             else {
 .                 rep(-Inf, level.n - 1)
 .             }
 .             upper_vec <- if (x_sign[s] == "negative") {
 .                 rep(0, level.n - 1)
 .             }
 .             else {
 .                 rep(Inf, level.n - 1)
 .             }
 .             lower.limits <- c(lower.limits, lower_vec)
 .             upper.limits <- c(upper.limits, upper_vec)
 .         }
 .         else {
 .             lower.limits <- c(lower.limits, ifelse(x_sign[s] == 
 .                 "positive", 0, -Inf))
 .             upper.limits <- c(upper.limits, ifelse(x_sign[s] == 
 .                 "negative", 0, Inf))
 .         }
 .     }
 .     lambda_hp <- unlist(hypParamSamNG$lambda[i])
 .     if (hyper_fixed == FALSE) {
 .         lambda_scaled <- lambda_min + (lambda_max - lambda_min) * 
 .             lambda_hp
 .     }
 .     else {
 .         lambda_scaled <- lambda_hp
 .     }
 .     if (add_penalty_factor) {
 .         penalty.factor <- unlist(as.data.frame(hypParamSamNG)[i, 
 .             grepl("penalty_", names(hypParamSamNG))])
 .     }
 .     else {
 .         penalty.factor <- rep(1, ncol(x_train))
 .     }
 .     glm_mod <- glmnet(x_train, y_train, family = "gaussian", 
 .         alpha = 0, lambda = lambda_scaled, lower.limits = lower.limits, 
 .         upper.limits = upper.limits, type.measure = "mse", penalty.factor = penalty.factor)
 .     mod_out <- model_refit(x_train, y_train, lambda = lambda_scaled, 
 .         lower.limits, upper.limits, intercept_sign)
 .     decompCollect <- model_decomp(coefs = mod_out$coefs, dt_modSaturated = dt_modSaturated, 
 .         x = x_train, y_pred = mod_out$y_pred, i = i, dt_modRollWind = dt_modRollWind, 
 .         refreshAddedStart = refreshAddedStart)
 .     nrmse <- mod_out$nrmse_train
 .     mape <- 0
 .     df.int <- mod_out$df.int
 .     if (!is.null(calibration_input)) {
 .         liftCollect <- calibrate_mmm(decompCollect = decompCollect, 
 .             calibration_input = calibration_input, paid_media_spends = paid_media_spends, 
 .             dayInterval = InputCollect$dayInterval)
 .         mape <- liftCollect[, mean(mape_lift)]
 .     }
 .     dt_decompSpendDist <- decompCollect$xDecompAgg[rn %in% paid_media_spends, 
 .         .(rn, xDecompAgg, xDecompPerc, xDecompMeanNon0Perc, xDecompMeanNon0, 
 .             xDecompPercRF, xDecompMeanNon0PercRF, xDecompMeanNon0RF)]
 .     dt_decompSpendDist <- dt_decompSpendDist[dt_spendShare[, 
 .         .(rn, spend_share, spend_share_refresh, mean_spend, total_spend)], 
 .         on = "rn"]
 .     dt_decompSpendDist[, `:=`(effect_share = xDecompPerc/sum(xDecompPerc), 
 .         effect_share_refresh = xDecompPercRF/sum(xDecompPercRF))]
 .     decompCollect$xDecompAgg[dt_decompSpendDist[, .(rn, spend_share_refresh, 
 .         effect_share_refresh)], `:=`(spend_share_refresh = i.spend_share_refresh, 
 .         effect_share_refresh = i.effect_share_refresh), on = "rn"]
 .     if (!refresh) {
 .         decomp.rssd <- dt_decompSpendDist[, sqrt(sum((effect_share - 
 .             spend_share)^2))]
 .     }
 .     else {
 .         dt_decompRF <- decompCollect$xDecompAgg[, .(rn, decomp_perc = xDecompPerc)][xDecompAggPrev[, 
 .             .(rn, decomp_perc_prev = xDecompPerc)], on = "rn"]
 .         decomp.rssd.nonmedia <- dt_decompRF[!(rn %in% paid_media_spends), 
 .             sqrt(mean((decomp_perc - decomp_perc_prev)^2))]
 .         decomp.rssd.media <- dt_decompSpendDist[, sqrt(mean((effect_share_refresh - 
 .             spend_share_refresh)^2))]
 .         decomp.rssd <- decomp.rssd.media + decomp.rssd.nonmedia/(1 - 
 .             refresh_steps/rollingWindowLength)
 .     }
 .     if (is.nan(decomp.rssd)) {
 .         decomp.rssd <- Inf
 .         dt_decompSpendDist[, `:=`(effect_share, 0)]
 .     }
 .     resultHypParam <- data.table()[, `:=`((hypParamSamName), 
 .         lapply(hypParamSam[1:length(hypParamSamName)], function(x) x))]
 .     resultCollect <- list(resultHypParam = resultHypParam[, `:=`(mape = mape, 
 .         nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .         lambda = lambda_scaled, lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .         pos = prod(decompCollect$xDecompAgg$pos), Elapsed = as.numeric(difftime(Sys.time(), 
 .             t1, units = "secs")), ElapsedAccum = as.numeric(difftime(Sys.time(), 
 .             t0, units = "secs")), iterPar = i, iterNG = lng, 
 .         df.int = df.int)], xDecompVec = if (hyper_fixed) {
 .         decompCollect$xDecompVec[, `:=`(intercept = decompCollect$xDecompAgg[rn == 
 .             "(Intercept)", xDecompAgg], mape = mape, nrmse = nrmse, 
 .             decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .             lambda = lambda_scaled, lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .             iterPar = i, iterNG = lng, df.int = df.int)]
 .     } else {
 .         NULL
 .     }, xDecompAgg = decompCollect$xDecompAgg[, `:=`(mape = mape, 
 .         nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .         lambda = lambda_scaled, lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .         iterPar = i, iterNG = lng, df.int = df.int)], liftCalibration = if (!is.null(calibration_input)) {
 .         liftCollect[, `:=`(mape = mape, nrmse = nrmse, decomp.rssd = decomp.rssd, 
 .             rsq_train = mod_out$rsq_train, lambda = lambda_scaled, 
 .             lambda_hp = lambda_hp, lambda_max = lambda_max, iterPar = i, 
 .             iterNG = lng)]
 .     } else {
 .         NULL
 .     }, decompSpendDist = dt_decompSpendDist[, `:=`(mape = mape, 
 .         nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .         lambda = lambda_scaled, lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .         iterPar = i, iterNG = lng, df.int = df.int)], mape.lift = mape, 
 .         nrmse = nrmse, decomp.rssd = decomp.rssd, lambda = lambda_scaled, 
 .         lambda_hp = lambda_hp, lambda_max = lambda_max, lambda_min_ratio = lambda_min_ratio, 
 .         iterPar = i, iterNG = lng, df.int = df.int)
 .     best_mape <- min(best_mape, mape)
 .     if (cnt == iterTotal) {
 .         print(" === ")
 .         print(paste0("Optimizer_name: ", optimizer_name, ";  Total_iterations: ", 
 .             cnt, ";   best_mape: ", best_mape))
 .     }
 .     return(resultCollect)
 . }
8. do.call("%dopar%", list(obj, ex), envir = parent.frame())
9. structure(list(args = (1:iterPar)(.doRNG.stream = list(c(10407L, 
 . 1459713542L, 892326095L, -709642044L, -99555083L, 72613170L, 
 . -1202762997L))), argnames = c("i", ".doRNG.stream"), evalenv = <environment>, 
 .     specified = character(0), combineInfo = list(fun = function (a, 
 .         ...) 
 .     c(a, list(...)), in.order = TRUE, has.init = TRUE, init = list(), 
 .         final = NULL, multi.combine = TRUE, max.combine = 100), 
 .     errorHandling = "stop", packages = "doRNG", export = NULL, 
 .     noexport = NULL, options = list(), verbose = FALSE), class = "foreach") %dopar% 
 .     {
 .         {
 .             rngtools::RNGseed(.doRNG.stream)
 .         }
 .         {
 .             t1 <- Sys.time()
 .             hypParamSam <- unlist(hypParamSamNG[i])
 .             dt_modAdstocked <- dt_mod[, .SD, .SDcols = setdiff(names(dt_mod), 
 .                 "ds")]
 .             mediaAdstocked <- list()
 .             mediaVecCum <- list()
 .             mediaSaturated <- list()
 .             for (v in 1:length(all_media)) {
 .                 adstock <- check_adstock(adstock)
 .                 m <- dt_modAdstocked[, get(all_media[v])]
 .                 if (adstock == "geometric") {
 .                   theta <- hypParamSam[paste0(all_media[v], "_thetas")]
 .                   x_list <- adstock_geometric(x = m, theta = theta)
 .                 }
 .                 else if (adstock == "weibull_cdf") {
 .                   shape <- hypParamSam[paste0(all_media[v], "_shapes")]
 .                   scale <- hypParamSam[paste0(all_media[v], "_scales")]
 .                   x_list <- adstock_weibull(x = m, shape = shape, 
 .                     scale = scale, windlen = rollingWindowLength, 
 .                     type = "cdf")
 .                 }
 .                 else if (adstock == "weibull_pdf") {
 .                   shape <- hypParamSam[paste0(all_media[v], "_shapes")]
 .                   scale <- hypParamSam[paste0(all_media[v], "_scales")]
 .                   x_list <- adstock_weibull(x = m, shape = shape, 
 .                     scale = scale, windlen = rollingWindowLength, 
 .                     type = "pdf")
 .                 }
 .                 m_adstocked <- x_list$x_decayed
 .                 mediaAdstocked[[v]] <- m_adstocked
 .                 mediaVecCum[[v]] <- x_list$thetaVecCum
 .                 m_adstockedRollWind <- m_adstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .                 alpha <- hypParamSam[paste0(all_media[v], "_alphas")]
 .                 gamma <- hypParamSam[paste0(all_media[v], "_gammas")]
 .                 mediaSaturated[[v]] <- saturation_hill(m_adstockedRollWind, 
 .                   alpha = alpha, gamma = gamma)
 .             }
 .             names(mediaAdstocked) <- all_media
 .             dt_modAdstocked[, `:=`((all_media), mediaAdstocked)]
 .             dt_mediaVecCum <- data.table()[, `:=`((all_media), 
 .                 mediaVecCum)]
 .             names(mediaSaturated) <- all_media
 .             dt_modSaturated <- dt_modAdstocked[rollingWindowStartWhich:rollingWindowEndWhich]
 .             dt_modSaturated[, `:=`((all_media), mediaSaturated)]
 .             dt_train <- copy(dt_modSaturated)
 .             y_train <- dt_train$dep_var
 .             x_train <- model.matrix(dep_var ~ ., dt_train)[, 
 .                 -1]
 .             dt_sign <- dt_modSaturated[, !"dep_var"]
 .             x_sign <- c(prophet_signs, context_signs, paid_media_signs, 
 .                 organic_signs)
 .             names(x_sign) <- c(prophet_vars, context_vars, paid_media_spends, 
 .                 organic_vars)
 .             check_factor <- sapply(dt_sign, is.factor)
 .             lower.limits <- upper.limits <- c()
 .             for (s in 1:length(check_factor)) {
 .                 if (check_factor[s] == TRUE) {
 .                   level.n <- length(levels(unlist(dt_sign[, s, 
 .                     with = FALSE])))
 .                   if (level.n <= 1) {
 .                     stop("factor variables must have more than 1 level")
 .                   }
 .                   lower_vec <- if (x_sign[s] == "positive") {
 .                     rep(0, level.n - 1)
 .                   }
 .                   else {
 .                     rep(-Inf, level.n - 1)
 .                   }
 .                   upper_vec <- if (x_sign[s] == "negative") {
 .                     rep(0, level.n - 1)
 .                   }
 .                   else {
 .                     rep(Inf, level.n - 1)
 .                   }
 .                   lower.limits <- c(lower.limits, lower_vec)
 .                   upper.limits <- c(upper.limits, upper_vec)
 .                 }
 .                 else {
 .                   lower.limits <- c(lower.limits, ifelse(x_sign[s] == 
 .                     "positive", 0, -Inf))
 .                   upper.limits <- c(upper.limits, ifelse(x_sign[s] == 
 .                     "negative", 0, Inf))
 .                 }
 .             }
 .             lambda_hp <- unlist(hypParamSamNG$lambda[i])
 .             if (hyper_fixed == FALSE) {
 .                 lambda_scaled <- lambda_min + (lambda_max - lambda_min) * 
 .                   lambda_hp
 .             }
 .             else {
 .                 lambda_scaled <- lambda_hp
 .             }
 .             if (add_penalty_factor) {
 .                 penalty.factor <- unlist(as.data.frame(hypParamSamNG)[i, 
 .                   grepl("penalty_", names(hypParamSamNG))])
 .             }
 .             else {
 .                 penalty.factor <- rep(1, ncol(x_train))
 .             }
 .             glm_mod <- glmnet(x_train, y_train, family = "gaussian", 
 .                 alpha = 0, lambda = lambda_scaled, lower.limits = lower.limits, 
 .                 upper.limits = upper.limits, type.measure = "mse", 
 .                 penalty.factor = penalty.factor)
 .             mod_out <- model_refit(x_train, y_train, lambda = lambda_scaled, 
 .                 lower.limits, upper.limits, intercept_sign)
 .             decompCollect <- model_decomp(coefs = mod_out$coefs, 
 .                 dt_modSaturated = dt_modSaturated, x = x_train, 
 .                 y_pred = mod_out$y_pred, i = i, dt_modRollWind = dt_modRollWind, 
 .                 refreshAddedStart = refreshAddedStart)
 .             nrmse <- mod_out$nrmse_train
 .             mape <- 0
 .             df.int <- mod_out$df.int
 .             if (!is.null(calibration_input)) {
 .                 liftCollect <- calibrate_mmm(decompCollect = decompCollect, 
 .                   calibration_input = calibration_input, paid_media_spends = paid_media_spends, 
 .                   dayInterval = InputCollect$dayInterval)
 .                 mape <- liftCollect[, mean(mape_lift)]
 .             }
 .             dt_decompSpendDist <- decompCollect$xDecompAgg[rn %in% 
 .                 paid_media_spends, .(rn, xDecompAgg, xDecompPerc, 
 .                 xDecompMeanNon0Perc, xDecompMeanNon0, xDecompPercRF, 
 .                 xDecompMeanNon0PercRF, xDecompMeanNon0RF)]
 .             dt_decompSpendDist <- dt_decompSpendDist[dt_spendShare[, 
 .                 .(rn, spend_share, spend_share_refresh, mean_spend, 
 .                   total_spend)], on = "rn"]
 .             dt_decompSpendDist[, `:=`(effect_share = xDecompPerc/sum(xDecompPerc), 
 .                 effect_share_refresh = xDecompPercRF/sum(xDecompPercRF))]
 .             decompCollect$xDecompAgg[dt_decompSpendDist[, .(rn, 
 .                 spend_share_refresh, effect_share_refresh)], 
 .                 `:=`(spend_share_refresh = i.spend_share_refresh, 
 .                   effect_share_refresh = i.effect_share_refresh), 
 .                 on = "rn"]
 .             if (!refresh) {
 .                 decomp.rssd <- dt_decompSpendDist[, sqrt(sum((effect_share - 
 .                   spend_share)^2))]
 .             }
 .             else {
 .                 dt_decompRF <- decompCollect$xDecompAgg[, .(rn, 
 .                   decomp_perc = xDecompPerc)][xDecompAggPrev[, 
 .                   .(rn, decomp_perc_prev = xDecompPerc)], on = "rn"]
 .                 decomp.rssd.nonmedia <- dt_decompRF[!(rn %in% 
 .                   paid_media_spends), sqrt(mean((decomp_perc - 
 .                   decomp_perc_prev)^2))]
 .                 decomp.rssd.media <- dt_decompSpendDist[, sqrt(mean((effect_share_refresh - 
 .                   spend_share_refresh)^2))]
 .                 decomp.rssd <- decomp.rssd.media + decomp.rssd.nonmedia/(1 - 
 .                   refresh_steps/rollingWindowLength)
 .             }
 .             if (is.nan(decomp.rssd)) {
 .                 decomp.rssd <- Inf
 .                 dt_decompSpendDist[, `:=`(effect_share, 0)]
 .             }
 .             resultHypParam <- data.table()[, `:=`((hypParamSamName), 
 .                 lapply(hypParamSam[1:length(hypParamSamName)], 
 .                   function(x) x))]
 .             resultCollect <- list(resultHypParam = resultHypParam[, 
 .                 `:=`(mape = mape, nrmse = nrmse, decomp.rssd = decomp.rssd, 
 .                   rsq_train = mod_out$rsq_train, lambda = lambda_scaled, 
 .                   lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .                   pos = prod(decompCollect$xDecompAgg$pos), Elapsed = as.numeric(difftime(Sys.time(), 
 .                     t1, units = "secs")), ElapsedAccum = as.numeric(difftime(Sys.time(), 
 .                     t0, units = "secs")), iterPar = i, iterNG = lng, 
 .                   df.int = df.int)], xDecompVec = if (hyper_fixed) {
 .                 decompCollect$xDecompVec[, `:=`(intercept = decompCollect$xDecompAgg[rn == 
 .                   "(Intercept)", xDecompAgg], mape = mape, nrmse = nrmse, 
 .                   decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                   lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                   lambda_max = lambda_max, iterPar = i, iterNG = lng, 
 .                   df.int = df.int)]
 .             } else {
 .                 NULL
 .             }, xDecompAgg = decompCollect$xDecompAgg[, `:=`(mape = mape, 
 .                 nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                 lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                 lambda_max = lambda_max, iterPar = i, iterNG = lng, 
 .                 df.int = df.int)], liftCalibration = if (!is.null(calibration_input)) {
 .                 liftCollect[, `:=`(mape = mape, nrmse = nrmse, 
 .                   decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                   lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                   lambda_max = lambda_max, iterPar = i, iterNG = lng)]
 .             } else {
 .                 NULL
 .             }, decompSpendDist = dt_decompSpendDist[, `:=`(mape = mape, 
 .                 nrmse = nrmse, decomp.rssd = decomp.rssd, rsq_train = mod_out$rsq_train, 
 .                 lambda = lambda_scaled, lambda_hp = lambda_hp, 
 .                 lambda_max = lambda_max, iterPar = i, iterNG = lng, 
 .                 df.int = df.int)], mape.lift = mape, nrmse = nrmse, 
 .                 decomp.rssd = decomp.rssd, lambda = lambda_scaled, 
 .                 lambda_hp = lambda_hp, lambda_max = lambda_max, 
 .                 lambda_min_ratio = lambda_min_ratio, iterPar = i, 
 .                 iterNG = lng, df.int = df.int)
 .             best_mape <- min(best_mape, mape)
 .             if (cnt == iterTotal) {
 .                 print(" === ")
 .                 print(paste0("Optimizer_name: ", optimizer_name, 
 .                   ";  Total_iterations: ", cnt, ";   best_mape: ", 
 .                   best_mape))
 .             }
 .             return(resultCollect)
 .         }
 .     }
10. e$fun(obj, substitute(ex), parent.frame(), e$data)

Warning message in foreach(i = 1:iterPar) %dorng% {:
“Foreach loop (doParallelMC) had changed the current RNG type: RNG was restored to same type, next state”
Timing stopped at: 0.192 0.004 0.196

Provide dummy data & model configuration

I am running robyn_run with data from the demo example. The configuration is identical to demo.R except for the adstock setting (and the hyperparameters list, which was changed to match the Weibull parameters):

data("dt_simulated_weekly")
data("dt_prophet_holidays")

InputCollect <- robyn_inputs(
  dt_input = dt_simulated_weekly
  ,dt_holidays = dt_prophet_holidays
  ,date_var = "DATE" # date format must be "2020-01-01"
  ,dep_var = "revenue" # there should be only one dependent variable
  ,dep_var_type = "revenue" # "revenue" or "conversion"
  ,prophet_vars = c("trend", "season", "holiday") # "trend","season", "weekday" & "holiday"
  ,prophet_country = "DE"# input one country. dt_prophet_holidays includes 59 countries by default
  ,context_vars = c("competitor_sales_B", "events") # e.g. competitors, discount, unemployment etc
  ,paid_media_spends = c("tv_S","ooh_S",	"print_S"	,"facebook_S", "search_S") # mandatory input
  ,paid_media_vars = c("tv_S", "ooh_S"	,	"print_S"	,"facebook_I" ,"search_clicks_P") # mandatory.
  # paid_media_vars must have same order as paid_media_spends. Use media exposure metrics like
  # impressions, GRP etc. If not applicable, use spend instead.
  ,organic_vars = c("newsletter") # marketing activity without media spend
  ,factor_vars = c("events") # specify which variables in context_vars or organic_vars are factorial
  ,window_start = "2016-11-23"
  ,window_end = "2018-08-22"
  ,adstock = "weibull_cdf" # geometric, weibull_cdf or weibull_pdf.
)

hyperparameters <- list(
    facebook_S_alphas = c(0.5, 3)
    ,facebook_S_gammas = c(0.3, 1)
    ,facebook_S_shapes = c(0.0001, 2)
    ,facebook_S_scales = c(0, 0.1)

    ,print_S_alphas = c(0.5, 3)
    ,print_S_gammas = c(0.3, 1)
    ,print_S_shapes = c(0.0001, 2)
    ,print_S_scales = c(0, 0.1)

    ,tv_S_alphas = c(0.5, 3)
    ,tv_S_gammas = c(0.3, 1)
    ,tv_S_shapes = c(0.0001, 2)
    ,tv_S_scales = c(0, 0.1)

    ,search_S_alphas = c(0.5, 3)
    ,search_S_gammas = c(0.3, 1)
    ,search_S_shapes = c(0.0001, 2)
    ,search_S_scales = c(0, 0.1)

    ,ooh_S_alphas = c(0.5, 3)
    ,ooh_S_gammas = c(0.3, 1)
    ,ooh_S_shapes = c(0.0001, 2)
    ,ooh_S_scales = c(0, 0.1)
    
    ,newsletter_alphas = c(0.5, 3)
    ,newsletter_gammas = c(0.3, 1)
    ,newsletter_shapes = c(0.0001, 2)
    ,newsletter_scales = c(0, 0.1)
)

InputCollect <- robyn_inputs(InputCollect = InputCollect, hyperparameters = hyperparameters)
print(InputCollect)

OutputModels <- robyn_run(
  InputCollect = InputCollect # feed in all model specification
  , cores = 1 # default
  , iterations = 2000 # recommended for the dummy dataset
  , trials = 5 # recommended for the dummy dataset
  , outputs = FALSE # outputs = FALSE disables direct model output
)
print(OutputModels)

Environment & Robyn version

packageVersion("Robyn") returns 3.6.1

sessionInfo() returns:

R version 4.1.3 (2022-03-10)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.4 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.9.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.9.0

locale:
 [1] LC_CTYPE=C.UTF-8    LC_NUMERIC=C        LC_TIME=C          
 [4] LC_COLLATE=C        LC_MONETARY=C       LC_MESSAGES=C      
 [7] LC_PAPER=C          LC_NAME=C           LC_ADDRESS=C       
[10] LC_TELEPHONE=C      LC_MEASUREMENT=C    LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] doRNG_1.8.2     rngtools_1.5.2  foreach_1.5.2   reticulate_1.24
[5] Robyn_3.6.1    

loaded via a namespace (and not attached):
 [1] bitops_1.0-7         matrixStats_0.61.0   lubridate_1.8.0     
 [4] doParallel_1.0.17    httr_1.4.2           rprojroot_2.0.2     
 [7] rstan_2.21.3         repr_1.1.4           tools_4.1.3         
[10] utf8_1.2.2           R6_2.5.1             rpart_4.1.16        
[13] lazyeval_0.2.2       colorspace_2.0-3     tidyselect_1.1.2    
[16] gridExtra_2.3        prettyunits_1.1.1    processx_3.5.2      
[19] compiler_4.1.3       glmnet_4.1-3         cli_3.2.0           
[22] rvest_1.0.2          xml2_1.3.3           scales_1.1.1        
[25] ggridges_0.5.3       callr_3.7.0          rappdirs_0.3.3      
[28] pbdZMQ_0.3-7         stringr_1.4.0        digest_0.6.29       
[31] StanHeaders_2.21.0-7 extraDistr_1.9.1     base64enc_0.1-3     
[34] pkgconfig_2.0.3      htmltools_0.5.2      fastmap_1.1.0       
[37] rlang_1.0.2          rPref_1.3            shape_1.4.6         
[40] prophet_1.0          generics_0.1.2       jsonlite_1.8.0      
[43] dplyr_1.0.8          zip_2.2.0            inline_0.3.19       
[46] RCurl_1.98-1.6       magrittr_2.0.2       loo_2.5.0           
[49] patchwork_1.1.1      Matrix_1.4-0         Rcpp_1.0.8.3        
[52] IRkernel_1.3.0.9000  munsell_0.5.0        fansi_1.0.2         
[55] lifecycle_1.0.1      stringi_1.7.6        pROC_1.18.0         
[58] yaml_2.3.5           pkgbuild_1.3.1       plyr_1.8.6          
[61] grid_4.1.3           parallel_4.1.3       crayon_1.5.0        
[64] lattice_0.20-45      IRdisplay_1.1        splines_4.1.3       
[67] lares_5.1.0          ps_1.6.0             pillar_1.7.0        
[70] igraph_1.2.11        uuid_1.0-4           codetools_0.2-18    
[73] stats4_4.1.3         glue_1.6.2           evaluate_0.15       
[76] rpart.plot_3.1.0     data.table_1.14.2    RcppParallel_5.1.5  
[79] vctrs_0.3.8          png_0.1-7            nloptr_2.0.0        
[82] gtable_0.3.0         purrr_0.3.4          tidyr_1.2.0         
[85] ggplot2_3.3.5        openxlsx_4.2.5       h2o_3.36.0.3        
[88] survival_3.2-13      minpack.lm_1.2-1     tibble_3.1.6        
[91] iterators_1.0.14     ellipsis_0.3.2       here_1.0.1          
@kyletgoldberg
Contributor

Hey there - thanks for flagging this issue. I was able to replicate it and created a branch with a fix for the bug. Could you test the branch on your end and make sure you are getting the expected results and everything is working for you? Thank you! https://github.com/facebookexperimental/Robyn/tree/weibull_testfix

@laresbernardo
Collaborator

FYI: to install that version, simply run remotes::install_github("facebookexperimental/Robyn/R@weibull_testfix"), restart your session and re-run the code. Thanks for testing @NikolayLutsyak
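
In code form, a minimal sketch (the library() reload and version check afterwards are just illustrative follow-up steps, not part of the instruction above):

# Install the branch that contains the fix
remotes::install_github("facebookexperimental/Robyn/R@weibull_testfix")
# Restart the R session, then reload and verify the package
library(Robyn)
packageVersion("Robyn")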

@NikolayLutsyak
Author

Thank you for such a fast hotfix! I have tested the weibull_testfix branch; training with Weibull adstock starts and seems to work, but I am now getting an enormous number of warnings:

Warning message in foreach(i = 1:iterPar) %dorng% {:
“Foreach loop (doParallelMC) had changed the current RNG type: RNG was restored to same type, next state”

Is this OK, and is it possible to suppress these warnings somehow?

@laresbernardo
Collaborator

Can you check if you have doRNG (>= 1.8.2)? Run packageVersion("doRNG")

Based on this ticket

@NikolayLutsyak
Author

Yes, packageVersion("doRNG") returns 1.8.2.
Interestingly, this warning appears only when cores = 1 in robyn_run; with cores > 1 everything is OK.

@NikolayLutsyak
Author

NikolayLutsyak commented Mar 23, 2022

I still get the error Error in {: task 1 failed - "'from' must be a finite number", but now it occurs in a new place.

robyn_run works correctly:

OutputModels <- robyn_run(
  InputCollect = InputCollect # feed in all model specification
  , cores = 20 # default
  , iterations = 2000 # recommended for the dummy dataset
  , trials = 1 # recommended for the dummy dataset
  , outputs = FALSE # outputs = FALSE disables direct model output
)

After that I try to run robyn_outputs:

OutputCollect <- robyn_outputs(
  InputCollect, OutputModels
  , pareto_fronts = 3
  , csv_out = "pareto" 
  , clusters = TRUE 
  , plot_pareto = TRUE
  , plot_folder = robyn_object
)

and get the following stack trace:

Error in {: task 1 failed - "'from' must be a finite number"
Traceback:

1. robyn_outputs(InputCollect, OutputModels, pareto_fronts = 3, 
 .     csv_out = "pareto", clusters = TRUE, plot_pareto = TRUE, 
 .     plot_folder = robyn_object)
2. robyn_pareto(InputCollect, OutputModels, pareto_fronts, calibration_constraint)
3. foreach(respN = seq_along(decompSpendDistPar$rn), .combine = rbind) %dorng% 
 .     {
 .         get_resp <- robyn_response(media_metric = decompSpendDistPar$rn[respN], 
 .             select_model = decompSpendDistPar$solID[respN], metric_value = decompSpendDistPar$mean_spend[respN], 
 .             dt_hyppar = resultHypParamPar, dt_coef = xDecompAggPar, 
 .             InputCollect = InputCollect, OutputCollect = OutputModels)$response
 .         dt_resp <- data.table(mean_response = get_resp, rn = decompSpendDistPar$rn[respN], 
 .             solID = decompSpendDistPar$solID[respN])
 .         return(dt_resp)
 .     }
4. do.call("%dopar%", list(obj, ex), envir = parent.frame())
5. structure(list(args = seq_along(decompSpendDistPar$rn)(.doRNG.stream = list(
 .     c(10407L, -1053767131L, -126173022L, -155812933L, 1344574400L, 
 .     -1668607807L, 2079799182L), c(10407L, 1823119202L, 1086938690L, 
 .     -1979133974L, -772903951L, -160941931L, 1697990691L), c(10407L, 
 .     -214602518L, -1819828941L, -1261338095L, 2120158377L, -904661020L, 
 .     -1719827412L), c(10407L, -2127710281L, -1506399862L, 1922222791L, 
 .     2075338323L, 1152645175L, -294540215L), c(10407L, -381162302L, 
 .     -171568343L, 2082427467L, -1183117886L, -1330589025L, 790224011L
.........................................................................................................................................
 .     34194177L, -629284681L, -271792460L, 1610391965L, 450799667L
 .     ), c(10407L, -1298508154L, -96438104L, -857743339L, 688467565L, 
 .     -135254522L, 391531769L))), argnames = c("respN", ".doRNG.stream"
 . ), evalenv = <environment>, specified = character(0), combineInfo = list(
 .     fun = function (..., deparse.level = 1) 
 .     .Internal(rbind(deparse.level, ...)), in.order = TRUE, has.init = FALSE, 
 .     init = NULL, final = NULL, multi.combine = TRUE, max.combine = 100), 
 .     errorHandling = "stop", packages = "doRNG", export = NULL, 
 .     noexport = NULL, options = list(), verbose = FALSE), class = "foreach") %dopar% 
 .     {
 .         {
 .             rngtools::RNGseed(.doRNG.stream)
 .         }
 .         {
 .             get_resp <- robyn_response(media_metric = decompSpendDistPar$rn[respN], 
 .                 select_model = decompSpendDistPar$solID[respN], 
 .                 metric_value = decompSpendDistPar$mean_spend[respN], 
 .                 dt_hyppar = resultHypParamPar, dt_coef = xDecompAggPar, 
 .                 InputCollect = InputCollect, OutputCollect = OutputModels)$response
 .             dt_resp <- data.table(mean_response = get_resp, rn = decompSpendDistPar$rn[respN], 
 .                 solID = decompSpendDistPar$solID[respN])
 .             return(dt_resp)
 .         }
 .     }
6. e$fun(obj, substitute(ex), parent.frame(), e$data)

It seems to me the problem could be in these lines:

./R/R/pareto.R:191:          x_list <- adstock_weibull(x = m, shape = shape, scale = scale, windlen = InputCollect$rollingWindowLength, type = "cdf")
./R/R/pareto.R:195:          x_list <- adstock_weibull(x = m, shape = shape, scale = scale, windlen = InputCollect$rollingWindowLength, type = "pdf")
./R/R/model.R:1081:    x_list <- adstock_weibull(x = media_vec, shape = shape, scale = scale, windlen = InputCollect$rollingWindowLength, type = "cdf")
./R/R/model.R:1085:    x_list <- adstock_weibull(x = media_vec, shape = shape, scale = scale, windlen = InputCollect$rollingWindowLength, type = "pdf")

I removed windlen = InputCollect$rollingWindowLength from these files and tested locally; everything works fine (though I don't know whether this breaks some of the calculation logic). I also wasn't able to push my changes to the repo.
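
For context, the message itself comes from base R's seq(), which throws it whenever its from argument is NA, NaN or infinite; so my guess is that a non-finite value reaches a sequence call inside the adstock code (possibly via the windlen padding, though that is only an assumption). A minimal standalone illustration, not Robyn code:

# Base R raises the identical error when seq() gets a non-finite 'from'
seq(from = NA_real_, to = 10)
#> Error: 'from' must be a finite number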

@kyletgoldberg
Contributor

Thanks for taking a look at that - that was the problematic condition in the original part, so it should be okay to remove in the rest. I'll work on updating it today.

@kyletgoldberg
Contributor

Just pushed the fix into main, thanks again for flagging this and working with us to get it solved! #355

@jsatani-tonal

@kyletgoldberg if this is in main, do I need to run
remotes::install_github("facebookexperimental/Robyn/R@weibull_testfix")?
I'm getting the same error:
Error in { : task 1 failed - "'from' must be a finite number"

@mast4461
Contributor

@jsatani-tonal, R couldn't execute remotes::install_github("facebookexperimental/Robyn/R@weibull_testfix") for me ("URL not found"), but after running remotes::install_github("facebookexperimental/Robyn/R"), i.e. reinstalling Robyn, I was able to run demo.R with the weibull_cdf adstock function (with an appropriate set of hyperparameters).

@laresbernardo
Collaborator

Yes! Makes sense. That's because that was the branch where we fixed this issue, and it has now been merged into master. Thanks for confirming @mast4461
