Ordered logistic regression, adjacent model with category-specific effects

November 7, 2018, 4:58pm

I am new to Bayesian statistics and the brms package. I ran an ordinal logistic regression with the adjacent-category family and category-specific effects to predict GPA (an ordinal variable with five levels: less than 5.9, 6-6.9, 7-7.9, 8-8.9, and 9-10) using two predictors: Family Income (an ordinal variable with five levels) and Cognitive Reflection (a continuous variable: IRT scores).

I specified the model as follows:

model <- brm(formula = GPA ~ cs(Family_Income) + cs(Cognitive_Reflection), data = datos1, family = acat())

Nevertheless, I have had problems interpreting the results, given that the 95% CIs of the following estimates exclude zero:

Variable                  Estimate   l-95% CI   u-95% CI
Family Income[1,2]        -0.78      -1.43      -0.12
Cognitive Reflection[4]    0.43       0.20       0.68

RESULTS: Ordinal Logistic Regression

Variable                  Estimate   l-95% CI   u-95% CI
Family Income[1,1]        -0.26      -1.39       0.87
Family Income[1,2]        -0.78      -1.43      -0.12
Family Income[1,3]         0.11      -0.49       0.72
Family Income[1,4]         0.34      -0.31       0.96
Family Income[2,1]        -0.11      -1.12       0.98
Family Income[2,2]        -0.25      -0.85       0.34
Family Income[2,3]         0.27      -0.26       0.83
Family Income[2,4]        -0.52      -1.11       0.04
Family Income[3,1]         0.02      -0.93       0.94
Family Income[3,2]        -0.05      -0.54       0.46
Family Income[3,3]         0.15      -0.29       0.63
Family Income[3,4]        -0.28      -0.77       0.18
Family Income[4,1]        -0.19      -1.03       0.66
Family Income[4,2]         0.17      -0.25       0.61
Family Income[4,3]        -0.08      -0.46       0.28
Family Income[4,4]        -0.15      -0.53       0.24
Cognitive Reflection[1]    0.20      -0.45       0.93
Cognitive Reflection[2]   -0.06      -0.38       0.26
Cognitive Reflection[3]    0.17      -0.10       0.45
Cognitive Reflection[4]    0.43       0.20       0.68


First of all, I wouldn’t engage in dichotomous thinking too much. In other words, don’t over-interpret the arbitrary threshold of 5%.

Then, you should check that you are not overfitting by using category-specific effects. I would recommend fitting a second model with just standard effects and then comparing the two models using loo.
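A minimal sketch of that comparison, assuming the data frame and variable names from the original post (the brm() calls require compiling and sampling the models, so this is slow):

```r
library(brms)

# Category-specific effects (as in the original model)
fit_cs <- brm(GPA ~ cs(Family_Income) + cs(Cognitive_Reflection),
              data = datos1, family = acat())

# Standard (proportional) effects for both predictors
fit_std <- brm(GPA ~ Family_Income + Cognitive_Reflection,
               data = datos1, family = acat())

# Approximate leave-one-out cross-validation; the model with the
# higher elpd (equivalently, lower LOOIC) is preferred
loo_cs  <- loo(fit_cs)
loo_std <- loo(fit_std)
loo_compare(loo_std, loo_cs)
```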

You may find some guidance in one of my papers: https://psyarxiv.com/x8swp/

Thank you for your reply. I did in fact perform a model comparison between a standard-effects model and a category-specific-effects model, and found that the former had the lower LOOIC.

Model                        LOOIC     SE
Model 1 (standard effects)   2955.92   37.52
Model 2 (specific effects)   2976.33   42.02
Model 1 - Model 2            -20.41    15.33

Nevertheless, I am still unsure which marginal effects plots are the correct ones to report.

  1. marginal_effects(model): this produced the warning "Predictions are treated as continuous variables in 'marginal_effects' by default, which is likely invalid for ordinal families. Please set 'categorical' to TRUE."
  2. marginal_effects(model, categorical = TRUE)
  3. marginal_effects(model, ordinal = TRUE)

The last two show the change in the probability of responding in each category, and the change is clearly larger for certain categories (e.g. 3 and 4). Does this finding suggest that a category-specific model could be more plausible?
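For reference, the three calls as a sketch (note that in newer brms versions marginal_effects() has been renamed conditional_effects(); the arguments are the same):

```r
library(brms)

# 1. Default: the ordinal outcome is treated as continuous, hence the warning
marginal_effects(model)

# 2. One curve per response category: P(GPA = k) as a function of the predictor
marginal_effects(model, categorical = TRUE)

# 3. Heatmap-style display of the category probabilities over the predictor
marginal_effects(model, ordinal = TRUE)
```

For ordinal families, the categorical = TRUE plot is arguably the most natural one to report, since it shows the implied response probabilities rather than a pseudo-continuous mean.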

Thanks

[Plot: marginal_effects(model)]

[Plot: marginal_effects(model, categorical = TRUE)]

[Plot: marginal_effects(model, ordinal = TRUE)]

The results may indicate that you are (slightly) overfitting when using category-specific effects for all of your predictors. That doesn't mean they don't capture something relevant between categories 3 and 4.

What does marginal_effects(model, categorical = TRUE) look like for the standard model?

marginal_effects(model, categorical = TRUE) looks like this; the impact of the predictor is clearer in some response categories.

[Plot: marginal_effects(model, categorical = TRUE)]

Is this the plot for that standard model? If yes, it looks pretty similar to the one from the category-specific model, doesn’t it?

They are indeed very similar. The plot from marginal_effects(specificeffects_model, categorical = TRUE) is shown below.
In conclusion, should I select the standard-effects model, even though the graph shows an effect in certain response categories?
If so, how should I report this finding?
Thanks

[Plot: marginal_effects(specificeffects_model, categorical = TRUE)]

I think reporting the results of the model comparison via loo, and then the coefficients of the standard model, would be a reasonable choice. You may report the results of the category-specific model as well, but state that this model is probably overfitting the data.

GRH15, May 2, 2025, 5:06pm

Hi Paul,

So sorry for this follow-up after such a long time. I'm wondering if it is possible to calculate the marginal effect of a given predictor on the dependent variable. For instance, for the data described here, is it possible to calculate how much being male vs. female raises or lowers the predicted GPA? I tried avg_comparisons() from the marginaleffects package for my adjacent-category model, but the outputs were grouped by rating category.
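Per-category output is expected here, since the model predicts a probability for each response category. One way to collapse this into a single number is to score the categories (e.g. 1 to 5) and average the posterior category probabilities into an expected rating. A hedged sketch, assuming a fitted brms model `model`, the data frame `datos1`, and a `Gender` variable coded "male"/"female" (all names from the thread; the category scoring is an assumption you must justify substantively):

```r
# Counterfactual data sets: everyone male vs. everyone female
nd_m <- transform(datos1, Gender = "male")
nd_f <- transform(datos1, Gender = "female")

# Posterior category probabilities: draws x observations x 5 categories
p_m <- posterior_epred(model, newdata = nd_m)
p_f <- posterior_epred(model, newdata = nd_f)

# Expected category score per draw and observation (categories scored 1..5)
scores <- 1:5
ev_m <- apply(p_m, c(1, 2), function(p) sum(p * scores))
ev_f <- apply(p_f, c(1, 2), function(p) sum(p * scores))

# Average marginal effect of Gender on the expected GPA category:
# one value per posterior draw, then summarized by quantiles
ame <- rowMeans(ev_m - ev_f)
quantile(ame, c(0.025, 0.5, 0.975))
```

The resulting interval is on the (arbitrary) 1-5 category scale, so it only makes sense if treating the GPA categories as equally spaced is defensible; otherwise the per-category comparisons that avg_comparisons() already gives you are the more honest summary.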