This model driver can be used to cluster data using the binomial distribution.
FLXMCregbinom(formula = . ~ ., size = NULL, alpha = 0, eps = 0)
A formula which is interpreted relative to the formula specified in the call to flexmix::flexmix() using stats::update.formula(). Only the left-hand side (response) of the formula is used. Default is to use the original model formula specified in flexmix::flexmix().
Number of trials (one or more).
A non-negative scalar acting as regularization parameter. Can be regarded as adding alpha observations equal to the population mean to each component.
A numeric value in [0, 1). When greater than zero, probabilities are truncated to lie within [eps, 1 - eps].
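The truncation can be sketched as follows (an illustration of the clamping described above, not the package's internal code; truncate_probs is a made-up name):

```r
# Illustrative sketch: clamp estimated probabilities to [eps, 1 - eps],
# as described for the eps argument (not the package's internal code).
truncate_probs <- function(p, eps) {
  pmin(pmax(p, eps), 1 - eps)
}
truncate_probs(c(0, 0.5, 1), eps = 0.01)
#> [1] 0.01 0.50 0.99
```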
An object of class "FLXC".
Using a regularization parameter alpha greater than zero can be viewed as adding alpha observations equal to the population mean to each component. This can be used to avoid degenerate solutions (i.e., probabilities of 0 or 1). It also has the effect that clusters become more similar to each other the larger alpha is chosen. For small values this effect is, however, mostly negligible.
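Under this pseudo-observation reading, a per-component estimate could be sketched as follows (a hypothetical illustration only; reg_prob and its arguments are made up for this sketch and are not the driver's internals):

```r
# Hypothetical sketch of the regularization: estimate a component's
# success probability from its own observations plus alpha
# pseudo-observations fixed at the population mean (not package code).
reg_prob <- function(x_k, size, alpha, pop_mean) {
  # x_k: success counts observed in the component; pop_mean: mean
  # success count over the whole data set (single variable).
  (sum(x_k) + alpha * pop_mean) / (size * (length(x_k) + alpha))
}
x_k <- c(2, 3, 4)  # component counts, size = 4 trials each
reg_prob(x_k, size = 4, alpha = 0, pop_mean = 2)  # plain ML estimate: 0.75
reg_prob(x_k, size = 4, alpha = 5, pop_mean = 2)  # shrunk toward 2/4 = 0.5
```

With alpha = 0 the estimate is the ordinary ML estimate; as alpha grows, all components are pulled toward pop_mean / size, which is why large alpha makes clusters more similar.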
Parameter estimation is performed with the MAP estimator for each component and variable, based on a Beta prior.
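As a rough illustration of such a MAP estimate (a textbook Beta-binomial sketch, not the package's implementation; map_binom is a made-up name): with x successes in n trials and a Beta(a, b) prior, the posterior is Beta(a + x, b + n - x), and its mode gives the MAP estimate.

```r
# Textbook sketch of a MAP estimate for a binomial probability under a
# Beta(a, b) prior (an illustration; not the package's implementation).
# Posterior: Beta(a + x, b + n - x); its mode is the MAP estimate.
map_binom <- function(x, n, a, b) {
  (a + x - 1) / (a + b + n - 2)
}
map_binom(x = 30, n = 100, a = 1, b = 1)  # flat prior: ML estimate 0.3
map_binom(x = 0,  n = 100, a = 2, b = 2)  # prior keeps estimate above 0
```

The second call shows how an informative prior avoids the degenerate estimate of exactly 0 even when no successes are observed.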
Ernst, D., Ortega Menjivar, L., Scharl, T., Grün, B. (2025). Ordinal Clustering with the flex-Scheme. Austrian Journal of Statistics. Submitted manuscript.
library("flexmix")
library("flexord")
library("flexclust")
# Sample data
k <- 4 # nr of clusters
size <- 4 # nr of trials
N <- 100 # obs. per cluster
set.seed(0xdeaf)
# random probabilities per component
probs <- lapply(seq_len(k), \(ki) runif(10, 0.01, 0.99))
# sample data
dat <- lapply(probs, \(p) {
lapply(p, \(p_i) {
rbinom(N, size, p_i)
}) |> do.call(cbind, args=_)
}) |> do.call(rbind, args=_)
true_clusters <- rep(seq_len(k), each = N)
# Cluster without regularization
m1 <- stepFlexmix(dat~1, model=FLXMCregbinom(size=size, alpha=0), k=k)
#> 4 : * * *
# Cluster with regularization
m2 <- stepFlexmix(dat~1, model=FLXMCregbinom(size=size, alpha=1), k=k)
#> 4 : * * *
# Both models are mostly able to reconstruct the true clusters (ARI ~ 0.96)
# (it's a very easy clustering problem)
# Small values for the regularization don't seem to affect the ARI (much)
randIndex(clusters(m1), true_clusters)
#> ARI
#> 0.9669515
randIndex(clusters(m2), true_clusters)
#> ARI
#> 0.9669515