scg_group {igraph}    R Documentation

Description
This function solves the Spectral Coarse Graining (SCG) problem; either exactly, or approximately but faster.
Usage

scg_group(
  V,
  nt,
  mtype = c("symmetric", "laplacian", "stochastic"),
  algo = c("optimum", "interv_km", "interv", "exact_scg"),
  p = NULL,
  maxiter = 100
)
Arguments

V: A numeric matrix of (eigen)vectors to be preserved by the coarse graining (the vectors are to be stored column-wise in V).

nt: A vector of positive integers of length one or equal to ncol(V). When of length one, the same number of groups is used for every eigenvector; otherwise nt[i] gives the number of groups into which V[,i] is partitioned.

mtype: The type of semi-projectors used in the SCG. For now “symmetric”, “laplacian” and “stochastic” are available.

algo: The algorithm used to solve the SCG problem. Possible values are “optimum”, “interv_km”, “interv” and “exact_scg”.

p: A probability vector of length equal to nrow(V); it is only used when mtype is “stochastic”, where it gives the stationary probability distribution of the Markov chain.

maxiter: A positive integer giving the maximum number of iterations of the k-means algorithm when algo is “interv_km”.
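A minimal call, for illustration only (the matrix, nt and algo values below are arbitrary choices, not recommendations):

library(igraph)
A <- matrix(runif(100), 10, 10)
A <- (A + t(A)) / 2                              # small symmetric matrix
V <- eigen(A, symmetric = TRUE)$vectors[, 1:2]   # two leading eigenvectors
scg_group(V, nt = 3, mtype = "symmetric", algo = "optimum")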
Details

The algorithm “optimum” solves the SCG problem exactly for each eigenvector in V. The running time of this algorithm is O(max(nt) * m^2) for the symmetric and Laplacian matrix problems (i.e. when mtype is “symmetric” or “laplacian”). It is O(m^3) for the stochastic problem. Here m is the number of rows in V. In all three cases, the memory usage is O(m^2).
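To get a feel for these costs, one can time the exact solver against an approximate one on the same input; the snippet below is only an illustrative sketch (matrix size and nt are arbitrary):

library(igraph)
set.seed(42)
m <- 500
A <- matrix(rexp(m^2), m, m)
A <- (A + t(A)) / 2
V <- eigen(A, symmetric = TRUE)$vectors[, 1:2]
system.time(scg_group(V, nt = 10, algo = "optimum"))   # cost grows like max(nt) * m^2 here
system.time(scg_group(V, nt = 10, algo = "interv"))    # essentially linear in m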
The algorithms “interv” and “interv_km” solve the SCG problem approximately by performing a (for now) constant binning of the components of the eigenvectors, that is, nt[i] constant-size bins are used to partition V[,i]. When algo = “interv_km”, the (Lloyd) k-means algorithm is run on each partition obtained by “interv” to improve accuracy.
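As a rough sketch of what this constant binning means (reading “constant-size bins” as equal-width intervals over the range of V[,i]; this restates the idea only and is not scg_group()'s internal code):

# equal-width binning of one eigenvector into nt groups (illustration only)
bin_components <- function(v, nt) {
  as.integer(cut(v, breaks = nt))
}
# e.g. bin_components(V[, 1], nt = 5) labels each component with its bin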
Once a minimizing partition (either exact or approximate) has been found for each eigenvector, the final grouping is worked out as follows: two vertices are grouped together in the final partition if they are grouped together in each minimizing partition. In general the size of the final partition is not known in advance when ncol(V) > 1.
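The combination rule can be restated as a small hypothetical helper (scg_group() performs this step internally; the function below is only a sketch):

# intersect per-eigenvector partitions: two vertices share a final group only
# if they carry the same label in every column of `labels`
combine_partitions <- function(labels) {
  key <- apply(labels, 1, paste, collapse = "-")
  match(key, unique(key))
}
# e.g. combine_partitions(cbind(gr1, gr2)) for two per-eigenvector groupings
# (gr1 and gr2 are hypothetical label vectors)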
Finally, the algorithm “exact_scg” groups the vertices with equal components in each eigenvector. The last three algorithms essentially have linear running time and memory load.
Value

A vector of nrow(V) integers giving the group label of each object (vertex) in the partition.
Author(s)

David Morton de Lachapelle, david.morton@epfl.ch, david.mortondelachapelle@swissquote.ch
References

D. Morton de Lachapelle, D. Gfeller, and P. De Los Rios, Shrinking Matrices while Preserving their Eigenpairs with Application to the Spectral Coarse Graining of Graphs. Submitted to SIAM Journal on Matrix Analysis and Applications, 2008. http://people.epfl.ch/david.morton
See Also

scg-method for a detailed introduction. scg, scg_eps
Examples

## We are not running these examples any more, because they
## take a long time to run and this is against the CRAN repository
## policy. Copy and paste them by hand to your R prompt if
## you want to run them.
## Not run:
library(igraph)

# eigenvectors of a random symmetric matrix
M <- matrix(rexp(10^6), 10^3, 10^3)
M <- (M + t(M))/2
V <- eigen(M, symmetric=TRUE)$vectors[,c(1,2)]
# displays size of the groups in the final partition
gr <- scg_group(V, nt=c(2,3))
col <- rainbow(max(gr))
plot(table(gr), col=col, main="Group size", xlab="group", ylab="size")
## comparison with the grouping obtained by kmeans
## for a partition of same size
gr.km <- kmeans(V, centers=max(gr), iter.max=100, nstart=100)$cluster
op <- par(mfrow=c(1,2))
plot(V[,1], V[,2], col=col[gr],
     main = "SCG grouping",
     xlab = "1st eigenvector",
     ylab = "2nd eigenvector")
plot(V[,1], V[,2], col=col[gr.km],
     main = "K-means grouping",
     xlab = "1st eigenvector",
     ylab = "2nd eigenvector")
par(op)
## kmeans disregards the first eigenvector as it
## spreads a much smaller range of values than the second one
### comparing optimal and k-means solutions
### in the one-dimensional case.
x <- rexp(2000, 2)
gr.true <- scg_group(cbind(x), 100)
gr.km <- kmeans(x, centers=100, iter.max=100, nstart=300)$cluster
scg_eps(cbind(x), gr.true)
scg_eps(cbind(x), gr.km)
## End(Not run)