Overview

The pcFactorStan package for R provides convenience functions and pre-programmed Stan models for the analysis of paired comparison data. Its purpose is to make these models easy to fit with Stan and easy to understand. pcFactorStan relies on the rstan package, which should be installed first; see the rstan documentation for installation instructions.

One situation where a factor model might be useful is when people play in tournaments of more than one game. For example, the computer player AlphaZero (Silver et al. 2018) was trained to play chess, shogi, and Go. We can take the tournament match outcome data for each of these games and find rankings among the players. We may also suspect that a latent board game skill accounts for some proportion of the variance in the per-game rankings. This proportion can be recovered by the factor model.

Our goal may be to fit a factor model, but it is necessary to build up the model step by step. There are essentially three models: ‘unidim’, ‘correlation’, and ‘factor’. ‘unidim’ analyzes a single item; ‘correlation’ is suitable for two or more items. Once you have vetted your items with the ‘unidim’ and ‘correlation’ models, you can try the ‘factor’ model. There is also a special model, ‘unidim_adapt’. All models except ‘unidim_adapt’ require scaling constants. To find appropriate scaling constants, we will fit ‘unidim_adapt’ to each item separately.

Brief tutorial

Physical activity flow propensity

The R code below first loads rstan and pcFactorStan. For extra diagnostics, we also load loo.

library(rstan)
library(pcFactorStan)
library(loo)

Next we take a peek at the data.

head(phyActFlowPropensity)
pa1 pa2 skill predict novelty creative complex goal1 feedback1 chatter waiting body control present spont stakes evaluated reward
mountain biking tennis 1 -1 -2 1 1 1 1 -2 1 1 1 1 1 1 2 0
mountain biking tennis 1 2 -1 -1 -1 0 2 1 2 0 1 0 0 1 2 -1
ice skating running -2 1 -1 -2 -1 1 1 -2 -2 -1 0 0 -1 -1 -1 0
climbing rowing -2 2 -2 -2 -2 0 -1 -1 -1 -1 -1 -1 1 0 0 0
card game gardening 0 0 0 0 2 0 0 0 -2 2 1 0 0 2 -2 2
dance table tennis 0 -2 -1 -1 0 -1 -1 -1 0 0 0 0 0 0 0 1

These data consist of paired comparisons of 87 physical activities on 16 flow-related facets. Participants submitted two activities using free-form input. These activities were substituted into item templates. For example, Item predict consisted of the prompt, “How predictable is the action?” with response options:

  • A1 is much more predictable than A2.
  • A1 is somewhat more predictable than A2.
  • Both offer roughly equal predictability.
  • A2 is somewhat more predictable than A1.
  • A2 is much more predictable than A1.

If the participant selected ‘golf’ and ‘running’ for activities then ‘golf’ was substituted into A1 and ‘running’ into A2. Duly prepared, the item was presented and the participant asked to select the most plausible statement.
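The substitution step can be sketched in plain R. The fillTemplate() helper below is purely illustrative and is not part of pcFactorStan:

```r
# Illustrative helper (not part of pcFactorStan): substitute the two
# chosen activities into the A1/A2 placeholders of a response option.
fillTemplate <- function(template, a1, a2) {
  out <- gsub("A1", a1, template, fixed = TRUE)
  gsub("A2", a2, out, fixed = TRUE)
}

fillTemplate("A1 is much more predictable than A2.", "golf", "running")
#> [1] "golf is much more predictable than running."
```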

A somewhat more response is scored 1 or -1, and a much more response is scored 2 or -2. A tie (i.e., roughly equal) is scored as zero. We will need to analyze each item separately before we analyze them together. Therefore, we will start with Item skill.
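The scoring scheme can be written out as a lookup. The sign convention below (negative favors A1) is an assumption for illustration, not the package's documented coding:

```r
# Hypothetical score coding for the five response options; which pole
# is coded negative is assumed here for illustration only.
scoreResponse <- function(choice) {
  switch(choice,
         "A1 much more"     = -2L,
         "A1 somewhat more" = -1L,
         "roughly equal"    =  0L,
         "A2 somewhat more" =  1L,
         "A2 much more"     =  2L)
}

scoreResponse("roughly equal")
#> [1] 0
```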

Data must be fed into Stan in a partially digested form. The next block of code demonstrates how a suitable data list may be constructed using the prepData() function. This function automatically determines the number of threshold parameters based on the range observed in your data. One thing it does not do is pick a varCorrection factor. The varCorrection determines the degree of adaptation in the model. Usually a choice between 2.0 and 4.0 will obtain optimal results.

dl <- prepData(phyActFlowPropensity[,c(paste0('pa',1:2), 'skill')])
dl$varCorrection <- 2.0

Next we fit the model using the pcStan() function, which is a wrapper for stan() from rstan. We also choose the number of chains. As usual in Stan, the first half of each chain is used to adapt the sampler's step size and mass matrix (i.e., warmup) and is excluded from inference.

fit1 <- pcStan("unidim_adapt", data=dl)

A variety of diagnostics are available to check whether the sampler ran into trouble.

check_hmc_diagnostics(fit1)
#> 
#> Divergences:
#> 0 of 4000 iterations ended with a divergence.
#> 
#> Tree depth:
#> 0 of 4000 iterations saturated the maximum tree depth of 10.
#> 
#> Energy:
#> E-BFMI indicated no pathological behavior.

Everything looks good, but there are a few more things to check. We want \(\widehat R\) < 1.015 and an effective sample size greater than 100 times the number of chains (Vehtari et al. 2019).

allPars <- summary(fit1, probs=c())$summary 
print(min(allPars[,'n_eff']))
#> [1] 675.4
print(max(allPars[,'Rhat']))
#> [1] 1.005

Again, everything looks good. If the target values were not reached then we would sample the model again with more iterations. Time for a plot,

library(ggplot2)

theta <- summary(fit1, pars=c("theta"), probs=c())$summary[,'mean']

ggplot(data.frame(x=theta, activity=dl$nameInfo$pa, y=0.47)) +
  geom_point(aes(x=x),y=0) +
  geom_text(aes(label=activity, x=x, y=y),
            angle=85, hjust=0, size=2,
            position = position_jitter(width = 0, height = 0.4)) + ylim(0,1) +
  theme(legend.position="none",
        axis.title.x=element_blank(),
        axis.title.y=element_blank(),
        axis.text.y=element_blank(),
        axis.ticks.y=element_blank())

Intuitively, this seems like a fairly reasonable ranking for skill. As pretty as the plot is, the main reason that we fit this model was to find a scaling constant that produces a standard deviation close to 1.0,

s50 <- summary(fit1, pars=c("scale"), probs=c(.5))$summary[,'50%']
print(s50)
#> [1] 0.5877

We use the median instead of the mean because scale is not likely to have a symmetric marginal posterior distribution. We obtained 0.5877, but that value is just for one item. We have to perform the same procedure for every item. Wow, that would be really tedious … if we did not have a function to do it for us! Fortunately, calibrateItems takes care of it and produces a table of the pertinent data,
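A quick simulation (plain R, no Stan needed) illustrates why the median is a safer summary than the mean for a right-skewed quantity such as scale:

```r
set.seed(1)
# Lognormal draws stand in for a positively skewed marginal posterior.
draws <- rlnorm(1e4, meanlog = 0, sdlog = 1)
mean(draws)    # dragged upward by the long right tail (near exp(0.5))
median(draws)  # stays near the true median, exp(0) = 1
```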

result <- calibrateItems(phyActFlowPropensity, iter=1000L) 
print(result)
item iter divergent treedepth low_bfmi n_eff Rhat scale thetaVar
skill 1000 0 0 0 492.46 1.007 0.5901 1.0213
predict 1000 0 0 0 641.66 1.003 0.5686 0.9963
novelty 1000 0 0 0 774.62 1.004 0.4728 0.8811
creative 1000 0 0 0 690.77 1.004 0.4692 0.8765
complex 1000 0 0 0 467.95 1.007 0.5349 0.9566
goal1 1000 0 0 1 94.72 1.030 0.0869 0.2848
feedback1 3375 0 0 0 435.95 1.002 0.1567 0.4220
chatter 1000 0 0 0 1024.80 1.002 0.2468 0.5712
waiting 1000 0 0 0 532.23 1.003 0.4969 0.9106
body 1000 0 0 0 960.07 1.003 0.3628 0.7385
control 1000 0 0 0 1030.68 1.004 0.3149 0.6719
present 1000 0 0 0 1059.55 1.005 0.2365 0.5551
spont 1000 0 0 0 1231.19 1.001 0.2622 0.5947
stakes 1000 0 0 0 889.16 1.001 0.2801 0.6214
evaluated 1500 0 0 0 988.22 1.004 0.4423 0.8427
reward 1000 0 0 0 898.49 1.012 0.2165 0.5234

Item goal1 ran into trouble. A nonzero count of divergent transitions or of low_bfmi warnings means that the item contained too little signal to estimate. Item feedback1 is also prone to failure. We could try again with varCorrection=1.0, but we are going to exclude these items instead. The model succeeded on the rest of the items. I requested iter=1000L to demonstrate how calibrateItems resamples the model until n_eff is large enough and Rhat is small enough. As the iter column shows, some items needed more than 1000 iterations to converge.
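Rather than eyeballing the table, problem items can be flagged programmatically. The snippet below uses a mock data frame (calTbl) with the same columns as the calibrateItems output; the thresholds follow the criteria stated earlier (no divergences or low_bfmi warnings, n_eff above 100 times the number of chains, Rhat below 1.015), and 4 chains is assumed:

```r
# Mock of three rows from the calibrateItems table, for illustration.
calTbl <- data.frame(
  item      = c("skill", "goal1", "feedback1"),
  divergent = c(0, 0, 0),
  low_bfmi  = c(0, 1, 0),
  n_eff     = c(492.46, 94.72, 435.95),
  Rhat      = c(1.007, 1.030, 1.002))

chains <- 4  # the default number of chains is assumed here
bad <- with(calTbl, divergent > 0 | low_bfmi > 0 |
                    n_eff < 100 * chains | Rhat > 1.015)
as.character(calTbl$item[bad])
#> [1] "goal1"
```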

Next we will fit the correlation model. To reduce the output size, we exclude the raw latent scores rawTheta and the Cholesky factor of the correlation matrix rawThetaCorChol; the regular correlation matrix is output anyway.

pafp <- phyActFlowPropensity 
excl <- match(c('goal1','feedback1'), colnames(pafp))
pafp <- pafp[,-excl]
dl <- prepData(pafp)
dl$scale <- result[match(dl$nameInfo$item, result$item), 'scale']  # match by item name
fit2 <- pcStan("correlation", data=dl, include=FALSE, pars=c('rawTheta', 'rawThetaCorChol'))
check_hmc_diagnostics(fit2)
#> 
#> Divergences:
#> 0 of 4000 iterations ended with a divergence.
#> 
#> Tree depth:
#> 0 of 4000 iterations saturated the maximum tree depth of 10.
#> 
#> Energy:
#> E-BFMI indicated no pathological behavior.

allPars <- summary(fit2, probs=0.5)$summary 
print(min(allPars[,'n_eff']))
#> [1] NaN
print(max(allPars[,'Rhat']))
#> [1] NaN

The HMC diagnostics look good, but … oh dear! Something is wrong with the n_eff and \(\widehat R\). Let us look more carefully,

head(allPars[order(allPars[,'sd']),])
#>               mean   se_mean        sd 50%  n_eff  Rhat
#> thetaCor[1,1]    1       NaN 0.000e+00   1    NaN   NaN
#> thetaCor[3,3]    1 1.264e-18 6.303e-17   1 2487.0 0.999
#> thetaCor[2,2]    1 1.030e-18 6.347e-17   1 3798.2 0.999
#> thetaCor[4,4]    1 4.778e-18 6.925e-17   1  210.1 0.999
#> thetaCor[5,5]    1 1.468e-18 7.042e-17   1 2302.7 0.999
#> thetaCor[7,7]    1 1.512e-18 7.725e-17   1 2609.9 0.999

Ah ha! It looks like all the entries of the correlation matrix are reported, including the entries that are not stochastic but are fixed to constant values (such as the diagonal, which is always 1). We need to filter those out to get sensible results.

allPars <- allPars[allPars[,'sd'] > 1e-6,] 
print(min(allPars[,'n_eff']))
#> [1] 971.2
print(max(allPars[,'Rhat']))
#> [1] 1.003

Ah, much better. Now we can inspect the correlation matrix. There are many ways to visualize a correlation matrix. One of my favorite ways is to plot it using the qgraph package,

covItemNames <- dl$nameInfo$item 
tc <- summary(fit2, pars=c("thetaCor"), probs=c(.5))$summary[,'50%']
tcor <- matrix(tc, length(covItemNames), length(covItemNames))
dimnames(tcor) <- list(covItemNames, covItemNames)

library(qgraph)
#> Registered S3 methods overwritten by 'huge':
#>   method    from   
#>   plot.sim  BDgraph
#>   print.sim BDgraph
qgraph(tcor, layout = "spring", graph = "cor", labels=colnames(tcor),
       legend.cex = 0.3,
       cut = 0.3, maximum = 1, minimum = 0, esize = 20,
       vsize = 7, repulsion = 0.8, negDashed=TRUE, theme="colorblind")