Standardizing parameters (*i.e.*, coefficients) allows for their comparison within and between models, variables and studies. Moreover, as it returns coefficients expressed in **standardized units** (for instance, in SDs of the response variable), it allows for the use of effect size interpretation guidelines, such as Cohen’s (1988) famous rules of thumb.

However, standardizing the model’s parameters should *not* be automatically and mindlessly done: for some research fields, particular variables or types of studies (*e.g.*, replications), it sometimes makes more sense to keep, use and interpret the original parameters, especially if they are well known or easily understood.

Critically, **parameter standardization is not a trivial process**. Different techniques exist that can lead to drastically different results. Thus, it is critical that the standardization method be explicitly documented and detailed.

The **effectsize** package includes different techniques of parameter standardization, described below (Bring 1994; Menard 2004, 2011; Gelman 2008; Schielzeth 2010).

```
library(effectsize)
library(dplyr)
lm(Sepal.Length ~ Petal.Length, data = iris) %>%
  standardize_parameters()
```

```
> Parameter    | Coefficient (std.) |        95% CI
> -------------------------------------------------
> (Intercept)  |          -5.03e-16 | [-0.08, 0.08]
> Petal.Length |               0.87 | [ 0.79, 0.95]
```

Standardizing the coefficient of this simple linear regression gives a value of `0.87`. But did you know that, for a simple regression, this is actually the **same as a correlation**? Thus, you can then apply some (*in*)famous interpretation guidelines (*e.g.*, Cohen’s rules of thumb).
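The output below can be reproduced with a plain correlation test; a minimal sketch using `correlation::cor_test()` (which produces a table in this format):

```
library(correlation)

# Pearson correlation between the two variables
cor_test(iris, "Sepal.Length", "Petal.Length")
```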

```
> Parameter1        | Parameter2        |    r |     t |  df |      p |       95% CI |  Method
> --------------------------------------------------------------------------------------------
> iris$Sepal.Length | iris$Petal.Length | 0.87 | 21.65 | 148 | < .001 | [0.83, 0.91] | Pearson
```

What happens in the case of **multiple continuous variables**? As each effect in a regression model is “adjusted” for the others, we might expect coefficients to be somewhat akin to **partial correlations**. Let’s start by computing the partial correlations between **Sepal.Length** and the three remaining variables.

```
df <- iris[, 1:4] # Remove the Species factor
correlation::correlation(df, partial = TRUE)[1:3, 1:3] # Select the rows of interest
```

```
> Parameter1 | Parameter2 | r
> -----------------------------------
> Sepal.Length | Sepal.Width | 0.63
> Sepal.Length | Petal.Length | 0.72
> Sepal.Length | Petal.Width | -0.34
```

Now, let’s apply another method to obtain effect sizes for frequentist regressions, based on the test statistics. We will convert the *t*-value (and its degrees of freedom, *df*) into a partial correlation coefficient *r*.

```
library(parameters)

model <- lm(Sepal.Length ~ ., data = df)
params <- model_parameters(model)

# Convert the t-values of the three predictors into partial r's
t_to_r(params$t[2:4], params$df_error[2:4])
```

```
>     r |         95% CI
> ----------------------
>  0.63 | [ 0.53,  0.70]
>  0.72 | [ 0.64,  0.78]
> -0.34 | [-0.47, -0.19]
```
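The conversion behind `t_to_r()` is simple: *r* = *t* / √(*t*² + *df*). A quick sanity check on the simple-regression case from earlier:

```
# Partial r from a t statistic: r = t / sqrt(t^2 + df)
t_value <- 21.65  # t for Petal.Length in the simple regression above
df_err  <- 148

t_value / sqrt(t_value^2 + df_err)  # ~0.87, matching the correlation
```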

Wow, the correlation coefficients retrieved from the regression model are **exactly** the same as the partial correlations!

However, note that in multiple regression, standardizing the parameters is not quite the same as computing the (partial) correlations, due to… math[^1]
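The table below can be reproduced by standardizing the full model, as before (a sketch, reusing `df`, the iris data without the Species column):

```
library(effectsize)

# Standardize the coefficients of the multiple regression
model <- lm(Sepal.Length ~ ., data = df)
standardize_parameters(model)
```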

```
> Parameter    | Coefficient (std.) |         95% CI
> --------------------------------------------------
> (Intercept)  |          -7.12e-17 | [-0.06,  0.06]
> Sepal.Width  |               0.34 | [ 0.27,  0.41]
> Petal.Length |               1.51 | [ 1.27,  1.75]
> Petal.Width  |              -0.51 | [-0.74, -0.28]
```

How does it work in the case of **factors**, where coefficients represent differences between a given level and a reference level (the intercept)? You might have heard that such a coefficient is similar to a **Cohen’s d**. Well, let’s see.

```
# Select portion of data containing the two levels of interest
data <- iris[iris$Species %in% c("setosa", "versicolor"), ]
lm(Sepal.Length ~ Species, data = data) %>%
  standardize_parameters()
```

```
> Parameter         | Coefficient (std.) |         95% CI
> -------------------------------------------------------
> (Intercept)       |              -0.72 | [-0.92, -0.53]
> Speciesversicolor |               1.45 | [ 1.18,  1.72]
```

This linear model suggests that the *standardized* difference between the *versicolor* level of Species and the *setosa* level (the reference level, the intercept) is of 1.45 standard deviations of `Sepal.Length` (because the response variable was standardized, right?). Let’s compute the **Cohen’s d** between these two levels:
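A sketch of this computation with `effectsize::cohens_d()`, reusing `data` (the setosa/versicolor subset created above):

```
library(effectsize)

# Cohen's d between the two species (formula interface)
cohens_d(Sepal.Length ~ Species, data = data)
```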

```
> Cohen's d |         95% CI
> --------------------------
>     -2.10 | [-2.59, -1.61]
```

*It is very different!* Why? How? Both differences should be expressed in units of SD! But which SDs? Different ones!

When looking at the difference between groups as a **slope**, the standardized parameter is the difference between the means in SDs of `Sepal.Length`. That is, the *slope* between `setosa` and `versicolor` is a change of 1.45 SDs of `Sepal.Length`.

However, when looking at the difference as a distance between two populations, Cohen’s d is the distance between the means in units of **pooled SDs**. That is, the *distance* between `setosa` and `versicolor` is of 2.1 SDs of each of the groups (here assumed to be equal).

Note that you can get an approximation of Cohen’s d by converting the *t* statistic from the regression model via `t_to_d()`:
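A sketch of both steps, assuming the model fitted on `data` (the setosa/versicolor subset) as above:

```
library(parameters)
library(effectsize)

# Inspect the t statistic of the group difference...
model <- lm(Sepal.Length ~ Species, data = data)
model_parameters(model)

# ...then convert it (and its residual df) into a d
t_to_d(10.52, df_error = 98)
```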

```
> Parameter            | Coefficient |   SE |       95% CI |     t | df |      p
> ------------------------------------------------------------------------------
> (Intercept)          |        5.01 | 0.06 | [4.88, 5.13] | 80.09 | 98 | < .001
> Species [versicolor] |        0.93 | 0.09 | [0.75, 1.11] | 10.52 | 98 | < .001
```

```
>    d |       95% CI
> -------------------
> 2.13 | [1.63, 2.62]
```

It is also interesting to note that using the *smart* method when standardizing parameters will give you indices equivalent to **Glass’ delta**, in which the difference is expressed in terms of the SD of the reference group (the intercept’s factor level).
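A sketch of how this output can be obtained, via the `method` argument of `standardize_parameters()` (reusing `data`, the setosa/versicolor subset):

```
library(effectsize)
library(dplyr)

# "smart" standardization yields Glass'-delta-like coefficients
# for factors, scaling by the SD of the reference group
lm(Sepal.Length ~ Species, data = data) %>%
  standardize_parameters(method = "smart")
```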

```
> Parameter         | Coefficient (std.) |       95% CI
> -----------------------------------------------------
> (Intercept)       |               0.00 | [0.00, 0.00]
> Speciesversicolor |               2.64 | [2.15, 3.13]
```

```
glass_delta(data$Sepal.Length[data$Species == "versicolor"],
            data$Sepal.Length[data$Species == "setosa"])
```

```
> Glass' delta |       95% CI
> ---------------------------
>         2.64 | [2.10, 3.17]
```

**… So note that some standardized differences are different than others! :)**

*To be added…*

Bring, Johan. 1994. “How to Standardize Regression Coefficients.” *The American Statistician* 48 (3): 209–13.

Gelman, Andrew. 2008. “Scaling Regression Inputs by Dividing by Two Standard Deviations.” *Statistics in Medicine* 27 (15): 2865–73.

Menard, Scott. 2004. “Six Approaches to Calculating Standardized Logistic Regression Coefficients.” *The American Statistician* 58 (3): 218–23.

———. 2011. “Standards for Standardized Logistic Regression Coefficients.” *Social Forces* 89 (4): 1409–28.

Schielzeth, Holger. 2010. “Simple Means to Improve the Interpretability of Regression Coefficients.” *Methods in Ecology and Evolution* 1 (2): 103–13.

[^1]: In fact, they are more closely related to the semi-partial correlations.