Model fit

Assumption checks

[Diagnostic plots from `performance::check_model()`]

Indices of model fit

| Metric    | Value |
|-----------|-------|
| AIC       | 43.34 |
| BIC       | 64.42 |
| R2        | 0.62  |
| R2 (adj.) | 0.61  |
| RMSE      | 0.27  |
| Sigma     | 0.27  |

For interpretation of performance metrics, please refer to this documentation.
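
These indices can be reproduced outside the dashboard. A minimal sketch for the model summarised here (the iris interaction model that is also described in the text report page), assuming `{performance}` is installed:

```r
library(performance)

# The model summarised in this dashboard
model <- lm(Sepal.Width ~ Sepal.Length * Species, data = iris)

# AIC, BIC, R2, adjusted R2, RMSE and sigma in a single table
model_performance(model)
```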

Parameter estimates

Plot

[Dot-and-whisker plot of the coefficient estimates]

Tabular summary

| Parameter                           | Coefficient |   SE | 95% CI         | t(144) |      p |
|-------------------------------------|-------------|------|----------------|--------|--------|
| (Intercept)                         |       -0.57 | 0.55 | (-1.66, 0.53)  |  -1.03 |  0.306 |
| Sepal Length                        |        0.80 | 0.11 | (0.58, 1.02)   |   7.23 | < .001 |
| Species (versicolor)                |        1.44 | 0.71 | (0.03, 2.85)   |   2.02 |  0.045 |
| Species (virginica)                 |        2.02 | 0.69 | (0.66, 3.37)   |   2.94 |  0.004 |
| Sepal Length * Species (versicolor) |       -0.48 | 0.13 | (-0.74, -0.21) |  -3.58 | < .001 |
| Sepal Length * Species (virginica)  |       -0.57 | 0.13 | (-0.82, -0.32) |  -4.49 | < .001 |

To find out more about table summary options, please refer to this documentation.
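
The table above (and the dot-and-whisker plot in the other column) comes from `{parameters}`. A minimal sketch, refitting the same model:

```r
library(parameters)
library(see)  # supplies the plot() method for parameters tables

model <- lm(Sepal.Width ~ Sepal.Length * Species, data = iris)

# Coefficients, SEs, 95% CIs, t- and p-values
tab <- model_parameters(model)
tab

# Dot-and-whisker plot of the estimates
plot(tab)
```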

Predicted Values

Plot

Error in match.arg(tolower(range), c("range", "iqr", "ci", "hdi", "eti", : 'arg' should be one of "range", "iqr", "ci", "hdi", "eti", "sd", "mad"
Error in lapply(text_modelbased, function(i) {: object 'text_modelbased' not found
Error in is.ggplot(x): object 'all_plots' not found

Tabular summary

Error in eval(expr, envir, enclos): object 'text_modelbased' not found
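
The two panels above are intended to show model-based predictions over the interaction terms; the `{modelbased}` chunks errored during this render (the template sets `error = TRUE`, so the messages are displayed instead of a plot and table). A minimal sketch of the intended output, using the same `{modelbased}` functions as the template (newer `{insight}` releases use `by =` in place of `at =` for `get_datagrid()`):

```r
library(modelbased)
library(see)  # plotting support for {modelbased} objects

model <- lm(Sepal.Width ~ Sepal.Length * Species, data = iris)

# Reference grid over the interaction terms
grid <- insight::get_datagrid(model, at = c("Sepal.Length", "Species"))

# Expected Sepal.Width (with confidence intervals) over the grid
pred <- estimate_expectation(model, data = grid)

pred        # tabular summary
plot(pred)  # one line per species across Sepal.Length
```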

Text reports

Textual summary

We fitted a linear model (estimated using OLS) to predict Sepal.Width with Sepal.Length and Species (formula: Sepal.Width ~ Sepal.Length * Species). The model explains a statistically significant and substantial proportion of variance (R2 = 0.62, F(5, 144) = 47.53, p < .001, adj. R2 = 0.61). The model’s intercept, corresponding to Sepal.Length = 0 and Species = setosa, is at -0.57 (95% CI (-1.66, 0.53), t(144) = -1.03, p = 0.306). Within this model:

  • The effect of Sepal Length is statistically significant and positive (beta = 0.80, 95% CI (0.58, 1.02), t(144) = 7.23, p < .001; Std. beta = 1.52, 95% CI (1.10, 1.93))
  • The effect of Species (versicolor) is statistically significant and positive (beta = 1.44, 95% CI (0.03, 2.85), t(144) = 2.02, p = 0.045; Std. beta = -3.11, 95% CI (-3.60, -2.62))
  • The effect of Species (virginica) is statistically significant and positive (beta = 2.02, 95% CI (0.66, 3.37), t(144) = 2.94, p = 0.004; Std. beta = -2.97, 95% CI (-3.50, -2.44))
  • The interaction effect of Species (versicolor) on Sepal Length is statistically significant and negative (beta = -0.48, 95% CI (-0.74, -0.21), t(144) = -3.58, p < .001; Std. beta = -0.91, 95% CI (-1.41, -0.41))
  • The interaction effect of Species (virginica) on Sepal Length is statistically significant and negative (beta = -0.57, 95% CI (-0.82, -0.32), t(144) = -4.49, p < .001; Std. beta = -1.08, 95% CI (-1.55, -0.60))

Standardized parameters were obtained by fitting the model on a standardized version of the dataset. 95% Confidence Intervals (CIs) and p-values were computed using a Wald t-distribution approximation.

The model explains a statistically significant and substantial proportion of variance (R2 = 0.62, F(5, 144) = 47.53, p < .001, adj. R2 = 0.61).
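
Both blocks of text are produced by `{report}`, and the standardized coefficients quoted in the bullet points come from refitting the model on standardized data. A minimal sketch of reproducing them; the `standardize = "refit"` option of `model_parameters()` is assumed to correspond to the refitting approach described above:

```r
library(report)
library(parameters)

model <- lm(Sepal.Width ~ Sepal.Length * Species, data = iris)

# Full narrative summary, as reproduced above
report(model)

# Only the variance-explained sentence
report_performance(model)

# Standardized coefficients, obtained by refitting on standardized data
model_parameters(model, standardize = "refit")
```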

Model information

[Model information from `insight::model_info()`, shown as an interactive `DT::datatable()`]
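
Everything above is generated from the parameterised R Markdown template reproduced in full below. A minimal sketch of rendering it directly with `rmarkdown::render()`; the file name `dashboard.Rmd` is a placeholder for wherever the template is saved, and the `*_args` parameters are passed as empty lists:

```r
library(rmarkdown)

model <- lm(Sepal.Width ~ Sepal.Length * Species, data = iris)

# "dashboard.Rmd" stands in for the path to the template below
render(
  "dashboard.Rmd",
  params = list(
    model            = model,
    check_model_args = list(),  # extra arguments for performance::check_model()
    parameters_args  = list(),  # extra arguments for parameters::parameters()
    performance_args = list()   # extra arguments for performance::performance()
  )
)
```

The `{easystats}` meta-package also provides `model_dashboard()`, which performs this step for a fitted model.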

---
title: "Regression model summary from `{easystats}`"
output: 
  flexdashboard::flex_dashboard:
    theme:
      version: 4
      # bg: "#101010"
      # fg: "#FDF7F7" 
      primary: "#0054AD"
      base_font:
        google: Prompt
      code_font:
        google: JetBrains Mono
params:
  model: model
  check_model_args: check_model_args
  parameters_args: parameters_args
  performance_args: performance_args
---

```{r setup, include=FALSE}
library(flexdashboard)
library(easystats)

# Since not all regression models are supported across all packages, make the
# dashboard chunks more fault-tolerant. E.g. a model might be supported in
# `{parameters}`, but not in `{report}`.
#
# For this reason, `error = TRUE`
knitr::opts_chunk$set(
  error = TRUE,
  out.width = "100%"
)
```

```{r}
# Get user-specified model data
model <- params$model

# Is it supported by `{easystats}`? Skip evaluation of the following chunks if not.
is_supported <- insight::is_model_supported(model)

if (!is_supported) {
  unsupported_message <- sprintf(
    "Unfortunately, objects of class '%s' are not yet supported in {easystats}.\n
    For a list of supported models, see `insight::supported_models()`.",
    class(model)[1]
  )
}
```


Model fit 
=====================================  

Column {data-width=700}
-----------------------------------------------------------------------

### Assumption checks

```{r check-model, eval=is_supported, fig.height=10, fig.width=10}
check_model_args <- c(list(model), params$check_model_args)
do.call(performance::check_model, check_model_args)
```

```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=300}
-----------------------------------------------------------------------

### Indices of model fit

```{r, eval=is_supported}
# `{performance}`
performance_args <- c(list(model), params$performance_args)
table_performance <- do.call(performance::performance, performance_args)
print_md(table_performance, layout = "vertical", caption = NULL)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

For interpretation of performance metrics, please refer to <a href="https://easystats.github.io/performance/reference/model_performance.html" target="_blank">this documentation</a>.

Parameter estimates
=====================================  

Column {data-width=550}
-----------------------------------------------------------------------

### Plot

```{r dot-whisker, eval=is_supported}
# `{parameters}`
parameters_args <- c(list(model), params$parameters_args)
table_parameters <- do.call(parameters::parameters, parameters_args)

plot(table_parameters)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=450}
-----------------------------------------------------------------------

### Tabular summary

```{r, eval=is_supported}
print_md(table_parameters, caption = NULL)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

To find out more about table summary options, please refer to <a href="https://easystats.github.io/parameters/reference/model_parameters.html" target="_blank">this documentation</a>.


Predicted Values
=====================================  

Column {data-width=600}
-----------------------------------------------------------------------

### Plot

```{r expected-values, eval=is_supported, fig.height=10, fig.width=10}
# `{modelbased}`
int_terms <- find_interactions(model, component = "conditional", flatten = TRUE)
con_terms <- find_variables(model)$conditional
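# If the model contains interactions, build predictions around each interaction
# term (plus any main effects not involved in an interaction); otherwise use
# each conditional term on its own.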

if (is.null(int_terms)) {
  model_terms <- con_terms
} else {
  model_terms <- clean_names(int_terms)
  int_terms <- unique(unlist(strsplit(clean_names(int_terms), ":", fixed = TRUE)))
  model_terms <- c(model_terms, setdiff(con_terms, int_terms))
}

text_modelbased <- lapply(unique(model_terms), function(i) {
  grid <- get_datagrid(model, at = i, range = "grid", preserve_range = FALSE)
  estimate_expectation(model, data = grid)
})

ggplot2::theme_set(theme_modern())
# all_plots <- lapply(text_modelbased, function(i) {
#   out <- do.call(visualisation_recipe, c(list(i), modelbased_args))
#   plot(out) + ggplot2::ggtitle("")
# })
all_plots <- lapply(text_modelbased, function(i) {
  out <- visualisation_recipe(i, show_data = "none")
  plot(out) + ggplot2::ggtitle("")
})

see::plots(all_plots, n_columns = round(sqrt(length(text_modelbased))))
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=400}
-----------------------------------------------------------------------

### Tabular summary

```{r, eval=is_supported, results="asis"}
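# Print each expected-values table as Markdown, inserting line breaks so that
# the footer fields ("Variable predicted", "Predictors modulated", ...) appear
# on separate lines.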
for (i in text_modelbased) {
  tmp <- print_md(i)
  tmp <- gsub("Variable predicted", "\nVariable predicted", tmp)
  tmp <- gsub("Predictors modulated", "\nPredictors modulated", tmp)
  tmp <- gsub("Predictors controlled", "\nPredictors controlled", tmp)
  print(tmp)
}
```


```{r, eval=!is_supported}
cat(unsupported_message)
```


Text reports
=====================================    

Column {data-width=500}
-----------------------------------------------------------------------

### Textual summary

```{r, eval=is_supported, results='asis', collapse=TRUE}
# `{report}`
text_report <- report(model)
text_report_performance <- report_performance(model)

# {report} wraps confidence intervals in square brackets; swap them for
# parentheses in the inline text
gsub("]", ")", gsub("[", "(", text_report, fixed = TRUE), fixed = TRUE)
cat("\n")
gsub("]", ")", gsub("[", "(", text_report_performance, fixed = TRUE), fixed = TRUE)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=500}
-----------------------------------------------------------------------

### Model information

```{r, eval=is_supported}
model_info_data <- insight::model_info(model)

model_info_data <- datawizard::data_to_long(as.data.frame(model_info_data))

DT::datatable(model_info_data)
```

```{r, eval=!is_supported}
cat(unsupported_message)
```