First, we load the required packages and sample data. We want to plot several items from an index, where all these items are 4-point Likert scales. The items deal with coping of family caregivers, i.e. how well they cope with their role of caring for an older relative.

To find the required variables in the data set, we search all variables for a specific pattern. We know that the variables from the COPE-index all have a **cop** in their variable name. We can easily search for variables in a data set with the `find_var()` function.

```
library(sjPlot)
library(sjmisc)
data(efc)

# find all variables from COPE-Index, which all
# have a "cop" in their variable name, and then
# plot the items as likert-plot
mydf <- find_var(efc, pattern = "cop", out = "df")
plot_likert(mydf)
```

The plot is not perfect, because for those values with just a few answers, we have overlapping values. However, there are quite a few options to tweak the plot. For instance, we can increase the axis-range (`grid.range`), show cumulative percentage-values only at the end of the bars (`values = "sum.outside"`) and show the percentage-sign (`show.prc.sign = TRUE`).

```
plot_likert(
  mydf,
  grid.range = c(1.2, 1.4),
  expand.grid = FALSE,
  values = "sum.outside",
  show.prc.sign = TRUE
)
```

The interesting question is whether we can reduce the dimensions of this scale by extracting principal components, in order to group single items into different sub-scales. To do that, we first run a PCA on the data. This can be done, e.g., with `sjt.pca()` or `sjp.pca()`.

```
# creates an HTML-table of the results of a PCA
sjt.pca(mydf)
```

| | Component 1 | Component 2 |
|---|---|---|
| do you feel you cope well as caregiver? | 0.29 | 0.60 |
| do you find caregiving too demanding? | -0.60 | -0.42 |
| does caregiving cause difficulties in your relationship with your friends? | -0.69 | -0.16 |
| does caregiving have negative effect on your physical health? | -0.73 | -0.12 |
| does caregiving cause difficulties in your relationship with your family? | -0.64 | -0.01 |
| does caregiving cause financial difficulties? | -0.69 | 0.12 |
| do you feel trapped in your role as caregiver? | -0.68 | -0.38 |
| do you feel supported by friends/neighbours? | -0.07 | 0.64 |
| do you feel caregiving worthwhile? | 0.07 | 0.75 |
| Cronbach’s α | 0.78 | 0.45 |

(varimax-rotation)
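For readers who prefer base R, the same kind of analysis can be sketched with `prcomp()` and the `varimax()` rotation from the **stats** package. The toy data below is a hypothetical stand-in for `mydf`, so the loadings will differ from the table above, and this sketch is not guaranteed to match `sjt.pca()`'s internals exactly; it merely illustrates how varimax-rotated loadings like those in the table arise.

```
# Base-R sketch: extract two principal components and varimax-rotate
# the loadings. 'toy' is made-up data, not the efc COPE items.
set.seed(1)
toy <- as.data.frame(matrix(sample(1:4, 9 * 100, replace = TRUE), ncol = 9))
pc <- prcomp(toy, scale. = TRUE)          # PCA on standardized items
rotated <- varimax(pc$rotation[, 1:2])    # rotate the first two loadings
round(unclass(rotated$loadings), 2)       # 9 items x 2 components
```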

As we can see, six items are associated with component one, while three items mainly load on the second component. The indices that indicate which item is associated with which component are returned by the function in the element `$factor.index`. So we save this in an object that can be used to create a grouped Likert-plot.

```
groups <- sjt.pca(mydf)$factor.index
plot_likert(mydf, groups = groups, values = "sum.outside")
```

There are even more options to tweak the Likert-plots. Find the full documentation at https://strengejacke.github.io/sjPlot/index.html.

The `view_df()` function from the sjPlot-package creates nice "codeplans" from your data sets, and also supports labelled data and tagged NA-values. This gives you a comprehensive, yet clear overview of your data set.

To demonstrate this function, we use a (labelled) data set from the European Social Survey. `view_df()` produces an HTML file that is displayed in the *viewer pane* when you use RStudio, or that can be opened in your web browser.

In this blog post, I used screenshots of the created HTML-tables, because otherwise the formatting gets lost in this blog…

We start with the "standard" output.

```
library(sjlabelled)
library(sjPlot)

# load data, tag NA-values with 'tag.na = TRUE'
ess <- read_spss("ESS8e02_1.sav", tag.na = TRUE)

# "standard" output; we only use selected variables
# for demonstration purposes
view_df(ess[, c(1, 2, 6, 8, 149, 151, 532)], max.len = 10)
```

As you can see, values for *string variables* are not shown by default, as these typically clutter up the output. Furthermore, values for variables with many different values are truncated at some point, to avoid overly long tables that are no longer readable.

Since the functions in **sjPlot** support labelled data, you see both *values* and associated *value labels* in the output, as well as different NA-values, so-called *tagged NAs* (which are often used in SPSS or Stata, though less in R). Tagged NAs can also have value labels (e.g. "unknown", "no answer", etc.); however, in the above example, the tagged NA-values have no value labels.

Finally, numeric (continuous) variables that are not labelled typically span a larger range of values. In such cases, printing all values is not very informative, so `view_df()` prints the range of these variables instead.

`view_df()` offers many options, e.g. to add the frequencies of values, the amount of missing values per variable, or even weighted frequencies.

```
# show more information...
view_df(
  ess[, c(1, 2, 6, 8, 149, 151, 532)],
  show.na = TRUE,
  show.type = TRUE,
  show.frq = TRUE,
  show.prc = TRUE,
  show.string.values = TRUE,
  show.id = TRUE
)
```

Of course you can also use non-labelled data with this function…

```
# works with non-labelled data as well, of course...
view_df(iris, show.frq = TRUE, show.type = TRUE)
```

This vignette demonstrates how to use quasiquotation in *sjlabelled* to label your data.

Usually, `set_labels()` can be used to add value labels to variables. The syntax of this function is easy to use, and `set_labels()` allows you to add value labels to multiple variables at once, if these variables share the same value labels.

In the following examples, we will use the `frq()` function, which shows an extra **label** column containing *value labels* if the data is labelled. If the data has *no* value labels, this column is not shown in the output.

```
library(sjlabelled)
library(sjmisc) # for frq()-function
library(rlang)
# unlabelled data
dummies <- data.frame(
dummy1 = sample(1:3, 40, replace = TRUE),
dummy2 = sample(1:3, 40, replace = TRUE),
dummy3 = sample(1:3, 40, replace = TRUE)
)
# set labels for all variables in the data frame
test <- set_labels(dummies, labels = c("low", "mid", "hi"))
attr(test$dummy1, "labels")
#> low mid hi
#> 1 2 3
frq(test, dummy1)
#>
#> # dummy1
#> # total N=40 valid N=40 mean=2.23 sd=0.86
#>
#> val label frq raw.prc valid.prc cum.prc
#> 1 low 11 27.5 27.5 27.5
#> 2 mid 9 22.5 22.5 50.0
#> 3 hi 20 50.0 50.0 100.0
#> NA NA 0 0.0 NA NA
# and set same value labels for two of three variables
test <- set_labels(
dummies, dummy1, dummy2,
labels = c("low", "mid", "hi")
)
frq(test)
#>
#> # dummy1
#> # total N=40 valid N=40 mean=2.23 sd=0.86
#>
#> val label frq raw.prc valid.prc cum.prc
#> 1 low 11 27.5 27.5 27.5
#> 2 mid 9 22.5 22.5 50.0
#> 3 hi 20 50.0 50.0 100.0
#> NA NA 0 0.0 NA NA
#>
#> # dummy2
#> # total N=40 valid N=40 mean=2.10 sd=0.74
#>
#> val label frq raw.prc valid.prc cum.prc
#> 1 low 9 22.5 22.5 22.5
#> 2 mid 18 45.0 45.0 67.5
#> 3 hi 13 32.5 32.5 100.0
#> NA NA 0 0.0 NA NA
#>
#> # dummy3
#> # total N=40 valid N=40 mean=1.98 sd=0.83
#>
#> val frq raw.prc valid.prc cum.prc
#> 1 14 35.0 35.0 35.0
#> 2 13 32.5 32.5 67.5
#> 3 13 32.5 32.5 100.0
#> 0 0.0 NA NA
```

`val_labels()` does the same job as `set_labels()`, but in a different way. While `set_labels()` requires variables to be specified in the `...`-argument and labels in the `labels`-argument, `val_labels()` requires both to be specified in the `...`.

`val_labels()` requires *named* vectors as arguments, with the *left-hand side* being the name of the variable that should be labelled, and the *right-hand side* containing the labels for the values.

```
test <- val_labels(dummies, dummy1 = c("low", "mid", "hi"))
attr(test$dummy1, "labels")
#> low mid hi
#> 1 2 3
# remaining variables are not labelled
frq(test)
#>
#> # dummy1
#> # total N=40 valid N=40 mean=2.23 sd=0.86
#>
#> val label frq raw.prc valid.prc cum.prc
#> 1 low 11 27.5 27.5 27.5
#> 2 mid 9 22.5 22.5 50.0
#> 3 hi 20 50.0 50.0 100.0
#> NA NA 0 0.0 NA NA
#>
#> # dummy2
#> # total N=40 valid N=40 mean=2.10 sd=0.74
#>
#> val frq raw.prc valid.prc cum.prc
#> 1 9 22.5 22.5 22.5
#> 2 18 45.0 45.0 67.5
#> 3 13 32.5 32.5 100.0
#> 0 0.0 NA NA
#>
#> # dummy3
#> # total N=40 valid N=40 mean=1.98 sd=0.83
#>
#> val frq raw.prc valid.prc cum.prc
#> 1 14 35.0 35.0 35.0
#> 2 13 32.5 32.5 67.5
#> 3 13 32.5 32.5 100.0
#> 0 0.0 NA NA
```

Unlike `set_labels()`, `val_labels()` allows the user to add *different* value labels to different variables in one function call. Another advantage, or difference, of `val_labels()` is its flexibility in defining variable names and value labels by using quasiquotation.

To use quasiquotation, we need the **rlang** package to be installed and loaded. Now we can have labels in a character vector, and use `!!` to unquote this vector.

```
labels <- c("low_quote", "mid_quote", "hi_quote")
test <- val_labels(dummies, dummy1 = !! labels)
attr(test$dummy1, "labels")
#> low_quote mid_quote hi_quote
#> 1 2 3
```

The same can be done with the names of *variables* that should get new value labels. We then need `!!` to unquote the variable name and `:=` as the assignment operator.

```
variable <- "dummy2"
test <- val_labels(dummies, !! variable := c("lo_var", "mid_var", "high_var"))
# no value labels
attr(test$dummy1, "labels")
#> NULL
# value labels
attr(test$dummy2, "labels")
#> lo_var mid_var high_var
#> 1 2 3
```

Finally, we can combine the above approaches to be flexible regarding both variable names and value labels.

```
variable <- "dummy3"
labels <- c("low", "mid", "hi")
test <- val_labels(dummies, !! variable := !! labels)
attr(test$dummy3, "labels")
#> low mid hi
#> 1 2 3
```

`set_label()` is the equivalent of `set_labels()` for adding variable labels to a variable. The equivalent of `val_labels()` is `var_labels()`, which works in the same way as `val_labels()`. In the case of *variable* labels, a `label`-attribute is added to a vector or factor (instead of a `labels`-attribute, which is used for *value* labels).
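The distinction is easy to see at the attribute level. Here is a minimal base-R sketch, with a hypothetical vector `x`, of what these functions store:

```
# Variable label: a single string in the "label" attribute.
# Value labels: a named vector in the "labels" attribute.
x <- sample(1:3, 10, replace = TRUE)
attr(x, "label")  <- "coping score"                # variable label
attr(x, "labels") <- c(low = 1, mid = 2, hi = 3)   # value labels
attr(x, "label")
#> [1] "coping score"
names(attr(x, "labels"))
#> [1] "low" "mid" "hi"
```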

The following examples show how to use `var_labels()` to add variable labels to the data. We demonstrate this function without further explanation, because it is very similar to `val_labels()`.

```
dummy <- data.frame(
a = sample(1:4, 10, replace = TRUE),
b = sample(1:4, 10, replace = TRUE),
c = sample(1:4, 10, replace = TRUE)
)
# simple usage
test <- var_labels(dummy, a = "first variable", c = "third variable")
attr(test$a, "label")
#> [1] "first variable"
attr(test$b, "label")
#> NULL
attr(test$c, "label")
#> [1] "third variable"
# quasiquotation for labels
v1 <- "First variable"
v2 <- "Second variable"
test <- var_labels(dummy, a = !! v1, b = !! v2)
attr(test$a, "label")
#> [1] "First variable"
attr(test$b, "label")
#> [1] "Second variable"
attr(test$c, "label")
#> NULL
# quasiquotation for variable names
x1 <- "a"
x2 <- "c"
test <- var_labels(dummy, !! x1 := "First", !! x2 := "Second")
attr(test$a, "label")
#> [1] "First"
attr(test$b, "label")
#> NULL
attr(test$c, "label")
#> [1] "Second"
# quasiquotation for both variable names and labels
test <- var_labels(dummy, !! x1 := !! v1, !! x2 := !! v2)
attr(test$a, "label")
#> [1] "First variable"
attr(test$b, "label")
#> NULL
attr(test$c, "label")
#> [1] "Second variable"
```

As we have demonstrated, `var_labels()` and `val_labels()` are among the most flexible and easy-to-use ways to add value and variable labels to our data. Another advantage is the consistent design of all functions in **sjlabelled**, which allows seamless integration into pipe-workflows.


In this post, I want to demonstrate the different options to calculate and visualize marginal effects from mixed models.

Basically, the type of predictions, i.e. whether to account for the uncertainty of random effects or not, can be set with the `type`-argument.

The default, `type = "fe"`, means that predictions are on the *population-level* and do not account for the random effect variances.

```
library(ggeffects)
library(lme4)
data(sleepstudy)
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
pr <- ggpredict(m, "Days")
pr
#>
#> # Predicted values of Reaction
#> # x = Days
#>
#> x predicted std.error conf.low conf.high
#> 0 251.405 6.825 238.029 264.781
#> 1 261.872 6.787 248.570 275.174
#> 2 272.340 7.094 258.435 286.244
#> 3 282.807 7.705 267.705 297.909
#> 5 303.742 9.581 284.963 322.520
#> 6 314.209 10.732 293.174 335.244
#> 7 324.676 11.973 301.210 348.142
#> 9 345.611 14.629 316.939 374.283
#>
#> Adjusted for:
#> * Subject = 0 (population-level)
plot(pr)
```

When `type = "re"`, the predicted values *are still on the population-level*. However, the random effect variances are taken into account, meaning that the prediction interval becomes larger. More technically speaking, `type = "re"` accounts for the uncertainty of the fixed effects *conditional on the estimates* of the random-effect variances and conditional modes (BLUPs).

The random-effect variance used here is the mean random-effect variance. The calculation is based on the proposal from *Johnson et al. 2014*, which is applicable to mixed models with more complex random effects structures.

As can be seen, compared to the previous example with `type = "fe"`, the predicted values are identical (both are on the population-level). However, the standard errors, and thus the resulting confidence (or prediction) intervals, are much larger.

```
pr <- ggpredict(m, "Days", type = "re")
pr
#>
#> # Predicted values of Reaction
#> # x = Days
#>
#> x predicted std.error conf.low conf.high
#> 0 251.405 41.769 169.539 333.271
#> 1 261.872 41.763 180.019 343.726
#> 2 272.340 41.814 190.386 354.293
#> 3 282.807 41.922 200.642 364.972
#> 5 303.742 42.307 220.822 386.661
#> 6 314.209 42.582 230.749 397.669
#> 7 324.676 42.912 240.571 408.781
#> 9 345.611 43.727 259.907 431.315
#>
#> Adjusted for:
#> * Subject = 0 (population-level)
plot(pr)
```

The reason why both `type = "fe"` and `type = "re"` return predictions at the population-level is that `ggpredict()` returns predicted values of the response *at specific levels* of given model predictors, which are defined in the data frame that is passed to the `newdata`-argument (of `predict()`). This data frame requires data for all model terms, including random effect terms. This in turn requires choosing certain levels or values for each random effect term, or setting those terms to zero or `NA` (for population-level). Since there is no general rule for which level(s) of random effect terms to choose in order to represent the random effects structure in the data, using the population-level seems the most clear and consistent approach.
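The underlying `newdata` mechanism can be illustrated with a plain `lm()` fit on toy data; for a mixed model, the data frame would additionally need columns for the random effect terms:

```
# predict() evaluates the model at exactly the rows supplied in 'newdata';
# ggpredict() builds such a data frame from the 'terms'-argument.
set.seed(1)
d <- data.frame(x = 1:20)
d$y <- 2 * d$x + rnorm(20)
fit <- lm(y ~ x, data = d)

nd <- data.frame(x = c(0, 5, 10))  # the "specific levels" of the predictor
predict(fit, newdata = nd)         # one prediction per supplied row
```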

To get predicted values for a specific level of the random effect term, simply define this level in the `condition`-argument.

```
ggpredict(m, "Days", type = "re", condition = c(Subject = 330))
#>
#> # Predicted values of Reaction
#> # x = Days
#>
#> x predicted std.error conf.low conf.high
#> 0 275.096 41.769 193.230 356.961
#> 1 280.749 41.763 198.895 362.602
#> 2 286.402 41.814 204.448 368.355
#> 3 292.054 41.922 209.889 374.220
#> 5 303.360 42.307 220.440 386.280
#> 6 309.013 42.582 225.554 392.473
#> 7 314.666 42.912 230.561 398.772
#> 9 325.972 43.727 240.268 411.676
```

Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on `simulate()`.

```
pr <- ggpredict(m, "Days", type = "sim")
pr
#>
#> # Predicted values of Reaction
#> # x = Days
#>
#> x predicted conf.low conf.high
#> 0 251.440 200.838 301.996
#> 1 261.860 212.637 311.678
#> 2 272.157 221.595 321.667
#> 3 282.800 233.416 332.738
#> 5 303.770 252.720 353.472
#> 6 314.146 264.651 363.752
#> 7 324.606 273.460 374.462
#> 9 345.319 295.069 394.735
#>
#> Adjusted for:
#> * Subject = 0 (population-level)
plot(pr)
```

For zero-inflated mixed effects models, typically fitted with the **glmmTMB**-package, predicted values can be conditioned on

- the fixed effects of the conditional model only (`type = "fe"`)
- the fixed effects and zero-inflation component (`type = "fe.zi"`)
- the fixed effects of the conditional model only (population-level), taking the random-effect variances into account (`type = "re"`)
- the fixed effects and zero-inflation component (population-level), taking the random-effect variances into account (`type = "re.zi"`)
- all model parameters (`type = "sim"`)

```
library(glmmTMB)
data(Salamanders)
m <- glmmTMB(
count ~ spp + mined + (1 | site),
ziformula = ~ spp + mined,
family = truncated_poisson,
data = Salamanders
)
```

Similar to mixed models without a zero-inflation component, `type = "fe"` and `type = "re"` for glmmTMB-models (with zero-inflation) both return predictions on the population-level, where the latter option accounts for the uncertainty of the random effects. In short, `predict(..., type = "link")` is called (however, predictions are finally back-transformed to the response scale).
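The back-transformation from the link to the response scale can be illustrated with a plain Poisson `glm()` on toy data; for the glmmTMB model above, the same relationship holds for the conditional model's log-link:

```
# Predictions on the link (log) scale, exponentiated, equal the
# predictions on the response scale.
set.seed(1)
d <- data.frame(x = rnorm(50))
d$y <- rpois(50, lambda = exp(0.5 + 0.3 * d$x))
fit <- glm(y ~ x, data = d, family = poisson())

all.equal(exp(predict(fit, type = "link")),
          predict(fit, type = "response"))
#> [1] TRUE
```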

```
pr <- ggpredict(m, "spp")
pr
#>
#> # Predicted counts of count
#> # x = spp
#>
#> x predicted std.error conf.low conf.high
#> 1 0.935 0.206 0.624 1.400
#> 2 0.555 0.308 0.304 1.015
#> 3 1.171 0.192 0.804 1.704
#> 4 0.769 0.241 0.480 1.233
#> 5 1.786 0.182 1.250 2.550
#> 6 1.713 0.182 1.200 2.445
#> 7 0.979 0.196 0.667 1.437
#>
#> Adjusted for:
#> * mined = yes
#> * site = NA (population-level)
plot(pr)
```

For models with a log-link, it makes sense to use a log-transformed y-axis as well, to get proportional confidence intervals in the plot. You can do this with the `log.y`-argument:

`plot(pr, log.y = TRUE)`

```
ggpredict(m, "spp", type = "re")
#>
#> # Predicted counts of count
#> # x = spp
#>
#> x predicted std.error conf.low conf.high
#> 1 0.935 0.309 0.510 1.714
#> 2 0.555 0.384 0.261 1.180
#> 3 1.171 0.300 0.650 2.107
#> 4 0.769 0.333 0.400 1.478
#> 5 1.786 0.294 1.004 3.175
#> 6 1.713 0.294 0.964 3.045
#> 7 0.979 0.303 0.541 1.772
#>
#> Adjusted for:
#> * mined = yes
#> * site = NA (population-level)
```

For `type = "fe.zi"`, the predicted response value is the expected value `mu*(1-p)` *without conditioning* on random effects. Since the zero-inflation and the conditional model work in "opposite directions", a higher expected value for the zero-inflation means a lower response, while a higher value for the conditional model means a higher response. While it is possible to calculate predicted values with `predict(..., type = "response")`, standard errors and confidence intervals cannot be derived directly from the `predict()`-function. Thus, confidence intervals for `type = "fe.zi"` are based on quantiles of simulated draws from a multivariate normal distribution (see also *Brooks et al. 2017, pp. 391-392* for details).

```
ggpredict(m, "spp", type = "fe.zi")
#>
#> # Predicted counts of count
#> # x = spp
#>
#> x predicted std.error conf.low conf.high
#> 1 0.138 0.045 0.052 0.224
#> 2 0.017 0.009 0.000 0.035
#> 3 0.245 0.072 0.109 0.381
#> 4 0.042 0.018 0.007 0.076
#> 5 0.374 0.108 0.166 0.582
#> 6 0.433 0.117 0.208 0.657
#> 7 0.205 0.063 0.082 0.328
#>
#> Adjusted for:
#> * mined = yes
#> * site = NA (population-level)
```
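The simulation idea behind these intervals can be sketched in base R: draw the parameters of both model parts with some uncertainty, compute `mu*(1-p)` for each draw, and take quantiles. The means and standard deviations below are made-up numbers, not taken from the Salamanders model, and Brooks et al. draw from a *multivariate* normal over all coefficients; independent normals are used here only to keep the sketch short.

```
# Quantile-based interval for mu*(1-p), approximated by simulation.
set.seed(42)
n <- 10000
mu <- exp(rnorm(n, mean = log(1.2), sd = 0.2))        # conditional mean (log-link)
p  <- plogis(rnorm(n, mean = qlogis(0.3), sd = 0.3))  # zero-inflation prob. (logit-link)

response <- mu * (1 - p)                 # expected response per draw
quantile(response, c(0.025, 0.975))      # approximate 95% interval
```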

For `type = "re.zi"`, the predicted response value is the expected value `mu*(1-p)`, accounting for the random-effect variances. Prediction intervals are calculated in the same way as for `type = "fe.zi"`, except that the mean random-effect variance is additionally considered for the confidence intervals.

```
ggpredict(m, "spp", type = "re.zi")
#>
#> # Predicted counts of count
#> # x = spp
#>
#> x predicted std.error conf.low conf.high
#> 1 0.138 0.235 0.032 0.354
#> 2 0.017 0.231 0.000 0.054
#> 3 0.245 0.243 0.065 0.609
#> 4 0.042 0.231 0.002 0.126
#> 5 0.374 0.257 0.098 0.932
#> 6 0.433 0.263 0.122 1.060
#> 7 0.205 0.239 0.054 0.510
#>
#> Adjusted for:
#> * mined = yes
#> * site = NA (population-level)
```

Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on `simulate()` (see *Brooks et al. 2017, pp. 392-393* for details). To achieve this, use `type = "sim"`.

```
ggpredict(m, "spp", type = "sim")
#>
#> # Predicted counts of count
#> # x = spp
#>
#> x predicted std.error conf.low conf.high
#> 1 1.089 1.288 0 4.131
#> 2 0.292 0.667 0 2.306
#> 3 1.520 1.550 0 5.241
#> 4 0.536 0.946 0 3.087
#> 5 2.212 2.125 0 7.153
#> 6 2.289 2.065 0 7.121
#> 7 1.314 1.367 0 4.697
#>
#> Adjusted for:
#> * mined = yes
#> * site = NA (population-level)
```

- Brooks ME, Kristensen K, Benthem KJ van, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB Balances Speed and Flexibility Among Packages for Zero-inflated Generalized Linear Mixed Modeling. The R Journal. 2017;9: 378–400.
- Johnson PC, O’Hara RB. 2014. Extension of Nakagawa & Schielzeth’s R2GLMM to random slopes models. Methods Ecol Evol, 5: 944-946. (doi: 10.1111/2041-210X.12225)

The last package-update introduced some new features I wanted to describe here: a revised `print()`-method, as well as a new option to plot marginal effects at different levels of random effects in mixed models…

The former `print()`-method simply showed the first predicted values, including confidence intervals. For numeric predictor variables with many values, you could, for instance, only see the first 10 of more than 100 predicted values. While it makes sense to shorten the (console) output, there was no information about the predictions for the last or other "representative" values of the term in question. Now, the `print()`-method automatically prints a selection of representative values, so you get a quick and clean impression of the range of predicted values for continuous variables:

```
library(ggeffects)
data(efc)
efc$c172code <- as.factor(efc$c172code)
fit <- lm(barthtot ~ c12hour * c172code + neg_c_7, data = efc)
ggpredict(fit, "c12hour")
#>
#> # Predicted values of Total score BARTHEL INDEX
#> # x = average number of hours of care per week
#>
#> x predicted std.error conf.low conf.high
#> 0 72.804 2.516 67.872 77.736
#> 20 68.060 2.097 63.951 72.170
#> 45 62.131 1.824 58.555 65.706
#> 65 57.387 1.886 53.691 61.083
#> 85 52.643 2.179 48.373 56.913
#> 105 47.900 2.626 42.752 53.047
#> 125 43.156 3.164 36.955 49.357
#> 170 32.482 4.531 23.602 41.363
#>
#> Adjusted for:
#> * c172code = 1
#> * neg_c_7 = 11.83
```

If you print predicted values of a term, grouped by the levels of another term (which makes sense in the above example due to the present interaction), the `print()`-method automatically adjusts the range of printed values to keep the console output short. In the following example, only 6 instead of 8 values per "block" are shown:

```
ggpredict(fit, c("c12hour", "c172code"))
#>
#> # Predicted values of Total score BARTHEL INDEX
#> # x = average number of hours of care per week
#>
#> # c172code = 1
#> x predicted std.error conf.low conf.high
#> 0 72.804 2.516 67.872 77.736
#> 30 65.689 1.946 61.874 69.503
#> 55 59.759 1.823 56.186 63.331
#> 85 52.643 2.179 48.373 56.913
#> 115 45.528 2.887 39.870 51.186
#> 170 32.482 4.531 23.602 41.363
#>
#> # c172code = 2
#> x predicted std.error conf.low conf.high
#> 0 76.853 1.419 74.073 79.633
#> 30 68.921 1.115 66.737 71.106
#> 55 62.311 1.122 60.112 64.510
#> 85 54.379 1.438 51.560 57.198
#> 115 46.447 1.934 42.656 50.238
#> 170 31.905 3.007 26.011 37.800
#>
#> # c172code = 3
#> x predicted std.error conf.low conf.high
#> 0 73.862 2.502 68.958 78.766
#> 30 66.925 1.976 63.053 70.798
#> 55 61.145 2.155 56.920 65.369
#> 85 54.208 2.963 48.400 60.016
#> 115 47.271 4.057 39.320 55.222
#> 170 34.554 6.303 22.200 46.907
#>
#> Adjusted for:
#> * neg_c_7 = 11.83
```

Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the `terms`-argument, and set `type = "re"`. In the following example, we fit a linear mixed model and first simply plot the marginal effects, *not* conditioned on random effects.

```
library(sjlabelled)
library(lme4)
data(efc)
efc$e15relat <- as_label(efc$e15relat)
m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc)
me <- ggpredict(m, terms = "c12hour")
plot(me)
```

To compute marginal effects for each grouping level, add the related random term to the `terms`-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on *each group level* of the random effects.

```
me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re")
plot(me)
```

Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values in brackets after the variable name in the `terms`-argument.

```
me <- ggpredict(m, terms = c("c12hour", "e15relat [child,cousin]"), type = "re")
plot(me)
```

If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels. To do this, use `terms = "groupfactor [sample=n]"`.

```
data("sleepstudy")
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
me <- ggpredict(m, terms = c("Days", "Subject [sample=8]"), type = "re")
plot(me)
```
