This page contains the R scripts to replicate the results of the paper “Learning to learn in Collective Self-adaptive Systems: Automated Reasoning for System Design Patterns” (under revision at eCAS2020).
The following packages have been used for the analysis:
if (!require("pacman")) install.packages("pacman")
pacman::p_load(pacman, rio, tidyverse, cluster, fpc, ggplot2, reshape2, purrr, dplyr, dendextend, PCAmixdata, klaR, factoextra, bootcluster, kmed, FactoMineR, corrplot, ExPosition, ape, circlize, kableExtra, knitr)
Dataset
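The analysis assumes a data frame named dataset is already in the workspace. Since rio is among the loaded packages, a minimal import sketch is (the file name is hypothetical):
dataset <- rio::import("dataset.csv") # hypothetical file name: adjust to the actual data file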
To visualize the dataset:
kable(dataset) %>%
kable_styling(full_width = F, font_size = 10, bootstrap_options = c("striped", "hover", "condensed"))
ID | Title | Application Domain | Emergent Behaviour | Cooperative (agent level) | Behaviour | Autonomy | Knowledge Access | Trigger - first | Trigger - update | Technique |
---|---|---|---|---|---|---|---|---|---|---|
1 | Pervasive Self-Learning with Multi-modal Distributed Sensors | CPS | No | Yes | Selfish but collaborative | Restricted Autonomy | Neighborhood | No initial knowledge (random) | Periodic | Probabilistic |
2 | Distributed W-Learning: Multi-Policy Optimization in Self-Organizing Systems | Traffic | No | Yes | Altruistic locally / Selfish globally | Full Autonomy | Neighborhood | No initial knowledge (random) | Not mentioned | Reinforcement Learning |
3 | Self-organized Fault-tolerant Routing in Peer-to-Peer Overlays | Network | Yes | No | Selfish | Full Autonomy | Minimal | From peers and other agents | Periodic | Reinforcement Learning |
4 | Self-organizing Bandwidth Sharing in Priority-Based Medium Access | Network | Yes | No | Selfish | Full Autonomy | Limited | No initial knowledge (random) | Action | Game Theory |
5 | Incremental Social Learning Applied to a Decentralized Decision-Making Mechanism: Collective Learning Made Faster | Other | Yes | No | Selfish | Full Autonomy | Minimal | From peers and other agents | Not mentioned | Statistics |
6 | Simulating Human Single Motor Units Using Self-Organizing Agents | Other | No | Yes | Altruistic | Full Autonomy | Limited | No initial knowledge (random) | Periodic | Evolutionary Process |
7 | Learning to be Different: Heterogeneity and Efficiency in Distributed Smart Camera Networks | CPS | Yes | No | Selfish | Full Autonomy | Maximal | No initial knowledge (random) | Periodic | Reinforcement Learning |
8 | Self-Organizational Reciprocal Agents for Conflict Avoidance in Allocation Problems | Other | No | Yes | Selfish but collaborative | Full Autonomy | Limited | No initial knowledge (random) | Periodic | Reinforcement Learning |
9 | A Mutual Influence Detection Algorithm for Systems with Local Performance Measurement | CPS | Yes | No | Selfish | Full Autonomy | Neighborhood | No initial knowledge (random) | Periodic | Reinforcement Learning |
10 | Towards Dynamic Epistemic Learning of Actions in Autonomic Multi-agent Systems | Other | No | No | Selfish | Full Autonomy | Maximal | No initial knowledge (random) | Task/Episode | Applied Logic |
11 | Cooperative Resource Allocation in Open Systems of Systems | CPS | No | No | Both versions explored | Full Autonomy | Tunable | Domain knowledge / humans | Learning task threshold achieved | Supervised Learning |
12 | Multiagent Reinforcement Social Learning Toward Coordination in Cooperative Multiagent Systems | Cooperative Game | No | No | Both versions explored | Full Autonomy | Neighborhood | No initial knowledge (random) | Task/Episode | Reinforcement Learning |
13 | Efficient and Robust Emergence of Norms Through Heuristic Collective Learning | Cooperative Game | No | No | Altruistic | Full Autonomy | Neighborhood | No initial knowledge (random) | Task/Episode | Game Theory |
14 | Reinforcement Learning of Informed Initial Policies for Decentralized Planning | CPS | No | No | Selfish | Full Autonomy | Minimal | From peers and other agents | Learning task threshold achieved | Reinforcement Learning |
15 | Prediction-Based Multi-Agent Reinforcement Learning in Inherently Non-Stationary Environments | CPS | Yes | No | Selfish | Full Autonomy | Minimal | Domain knowledge / humans | Periodic | Reinforcement Learning |
16 | A Reinforcement Learning Approach for Interdomain Routing with Link Prices | Network | Yes | No | Selfish | Full Autonomy | Minimal | No initial knowledge (random) | Action | Reinforcement Learning |
17 | Machine Learning in Disruption-tolerant MANETs | Network | No | Yes | Altruistic (collaborative) | Full Autonomy | Neighborhood | No initial knowledge (random) | Social interaction | Probabilistic |
18 | Mobilized ad-hoc networks: a reinforcement learning approach | Network | No | Yes | Altruistic (collaborative) | Full Autonomy | Neighborhood | No initial knowledge (random) | Social interaction | Reinforcement Learning |
19 | Autonomous smart routing for network QoS | Network | No | Yes | Altruistic (collaborative) | Full Autonomy | Neighborhood | No initial knowledge (random) | Action | Reinforcement Learning |
20 | Decentralized Bayesian Reinforcement Learning for Online Agent Collaboration | CPS | No | Yes | Altruistic (collaborative) | Full Autonomy | Neighborhood | No initial knowledge (random) | Social interaction | Reinforcement Learning |
21 | Modeling Assistant's Autonomy Constraints As a Means for Improving Autonomous Assistant-Agent Design | Market | No | Yes | Selfish | Full Autonomy | Minimal | Not mentioned | Action | Supervised Learning |
22 | Adaptivity at Every Layer: A Modular Approach for Evolving Societies of Learning Autonomous Systems | CPS | No | Yes | Altruistic (collaborative) | Full Autonomy | Minimal | Not mentioned | Social interaction | Reinforcement Learning |
23 | Bayesian Interaction Shaping: Learning to Influence Strategic Interactions in Mixed Robotic Domains | CPS | No | No | Altruistic locally / Selfish globally | Full Autonomy | Limited | Domain knowledge / humans | Action | Probabilistic |
24 | Resource Abstraction for Reinforcement Learning in Multiagent Congestion Problems | Traffic | Yes | No | Selfish | Full Autonomy | Minimal | No initial knowledge (random) | Not mentioned | Reinforcement Learning |
25 | Multiagent Reinforcement Learning and Self-organization in a Network of Agents | Distributed Task Allocation Problem | No | Yes | Selfish but collaborative | Full Autonomy | Minimal | No initial knowledge (random) | Action | Reinforcement Learning |
26 | Batch Reinforcement Learning in a Complex Domain | CPS | No | Yes | Selfish but collaborative | Full Autonomy | Maximal | Domain knowledge / humans | Learning task threshold achieved | Reinforcement Learning |
27 | Co-evolution of Agent Strategies in N-player Dilemmas | Cooperative Game | Yes | No | Selfish | Full Autonomy | Neighborhood | No initial knowledge (random) | Learning task threshold achieved | Game Theory |
28 | Self-organisation in an Agent Network via Learning | Distributed Task Allocation Problem | No | Yes | Selfish but collaborative | Full Autonomy | Neighborhood | No initial knowledge (random) | Action | Reinforcement Learning |
29 | Self-organization for Coordinating Decentralized Reinforcement Learning | Distributed Task Allocation Problem | Yes | No | Selfish | Full Autonomy | Tunable | No initial knowledge (random) | Action | Reinforcement Learning |
30 | Adjustable Autonomy in Real-world Multi-agent Environments | Other | No | Yes | Altruistic | Restricted Autonomy | Maximal | Domain knowledge / humans | Task/Episode | Reinforcement Learning |
31 | How Autonomy Oriented Computing (AOC) Tackles a Computationally Hard Optimization Problem | Cooperative Game | No | Yes | Altruistic | Full Autonomy | Maximal | Domain knowledge / humans | Action | Game Theory |
32 | A Bartering Approach to Improve Multiagent Learning | Other | No | Yes | Selfish | Full Autonomy | Maximal | Domain knowledge / humans | Action | Supervised Learning |
33 | Learning Sequences of Actions in Collectives of Autonomous Agents | CPS | Yes | No | Selfish | Full Autonomy | Minimal | No initial knowledge (random) | Action | Reinforcement Learning |
34 | Learning and Decision-Making for Intention Reconciliation | Market | No | Yes | Selfish | Restricted Autonomy | Minimal | No initial knowledge (random) | Action | Reinforcement Learning |
35 | Continuous Collaboration: A Case Study on the Development of an Adaptive Cyber-physical System | CPS | No | Yes | Altruistic (collaborative) | Full Autonomy | Maximal | No initial knowledge (random) | Learning task threshold achieved | Reinforcement Learning |
36 | RPLLEARN: Extending an Autonomous Robot Control Language to Perform | CPS | No | No | Selfish | Full Autonomy | Minimal | No initial knowledge (random) | Task/Episode | Statistics |
37 | Coordination Through Mutual Notification in Cooperative Multiagent Reinforcement Learning | CPS | No | No | Altruistic | Full Autonomy | Limited | No initial knowledge (random) | Task/Episode | Reinforcement Learning |
38 | On Topic Selection Strategies in Multi-agent Naming Game | Cooperative Game | Yes | No | Selfish | Full Autonomy | Minimal | No initial knowledge (random) | Action | Game Theory |
39 | Inter-institutional Social Capital for Self-Organising Nested Enterprises | CPS | Yes | Yes | Selfish | Full Autonomy | Minimal | No initial knowledge (random) | Learning task threshold achieved | Supervised Learning |
40 | Dealing with Unforeseen Situations in the Context of Self-Adaptive Urban Traffic Control: How to Bridge the Gap | Traffic | No | No | Altruistic locally / Selfish globally | Full Autonomy | Minimal | No initial knowledge (random) | Learning task threshold achieved | Reinforcement Learning |
41 | Decentralised Progressive Signal Systems for Organic Traffic Control | Traffic | No | Yes | Altruistic (collaborative) | Full Autonomy | Neighborhood | Not mentioned | Learning task threshold achieved | Reinforcement Learning |
42 | Learning in Open Adaptive Networks | Distributed Task Allocation Problem | Yes | No | Selfish | Full Autonomy | Minimal | Not mentioned | Not mentioned | Reinforcement Learning |
43 | A Machine Learning Approach to Performance Prediction of Total Order Broadcast Protocols | Network | No | No | Selfish | Full Autonomy | Minimal | Not mentioned | Not mentioned | Supervised Learning |
44 | Self-Adaptive Dissemination of Data in Dynamic Sensor Networks | Network | Yes | No | Selfish | Full Autonomy | Limited | Not mentioned | Not mentioned | Reinforcement Learning |
45 | Autonomic Multi-policy Optimization in Pervasive Systems: Overview and Evaluation | Traffic | No | Yes | Altruistic | Restricted Autonomy | Neighborhood | Not mentioned | Periodic | Reinforcement Learning |
46 | Self-Organising Zooms for Decentralised Redundancy Management in Visual Sensor Networks | CPS | No | Yes | Altruistic | Restricted Autonomy | Limited | No initial knowledge (random) | Periodic | Reinforcement Learning |
47 | Towards Data-centric Control of Sensor Networks through Bayesian Dynamic Linear Modelling | CPS | Yes | No | Selfish | Full Autonomy | Minimal | No initial knowledge (random) | Periodic | Probabilistic |
48 | Firefly-Inspired Synchronization for Improved Dynamic Pricing in Online Markets | Market | No | Yes | Selfish but collaborative | Full Autonomy | Maximal | No initial knowledge (random) | Periodic | Swarm System |
49 | Decentralized Approaches for Self-adaptation in Agent Organizations | Other | No | Yes | Selfish but collaborative | Full Autonomy | Neighborhood | No initial knowledge (random) | Periodic | Reinforcement Learning |
50 | Static Dynamic and Adaptive Heterogeneity in Distributed Smart Camera Networks | CPS | No | Yes | Selfish but collaborative | Full Autonomy | Neighborhood | No initial knowledge (random) | Task/Episode | Swarm System |
51 | Distributed Cooperation in Wireless Sensor Networks | CPS | No | Yes | Selfish but collaborative | Full Autonomy | Neighborhood | Not mentioned | Periodic | Game Theory and Reinforcement Learning |
52 | Prosumers as Aggregators in the DEZENT Context of Regenerative Power Production | CPS | Yes | No | Selfish | Restricted Autonomy | Tunable | Not mentioned | Periodic | Reinforcement Learning |
53 | Goal-Aware Team Affiliation in Collectives of Autonomous Robots | CPS | Yes | Yes | Altruistic | Full Autonomy | Limited | No initial knowledge (random) | Action | Reinforcement Learning |
54 | Decentralized Collective Learning for Self-Managed Sharing Economies | CPS | Yes | Yes | Tunable | Full Autonomy | Minimal | Domain knowledge / humans | Action | Gradient Descent |
55 | Constructivist Approach to State Space Adaptation in Reinforcement Learning | Traffic | No | No | Selfish | Full Autonomy | Minimal | Domain knowledge / humans | Task/Episode | Reinforcement Learning |
56 | TSLAM: A Trust-Enabled Self-Learning Agent Model for Service Matching in the Cloud Market | Market | No | Yes | Selfish | Full Autonomy | Minimal | Domain knowledge / humans | Periodic | Supervised Learning |
57 | Autonomous Management of Energy-Harvesting IoT Nodes Using Deep Reinforcement Learning | CPS | Yes | No | Selfish | Full Autonomy | Minimal | Domain knowledge / humans | Task/Episode | Reinforcement Learning |
58 | Reinforcement Learning for Cooperative Overtaking | Traffic | Yes | Yes | Altruistic | Full Autonomy | Neighborhood | Domain knowledge / humans | Periodic | Reinforcement Learning |
59 | New quantum-genetic based OLSR protocol (QG-OLSR) for Mobile Ad hoc Network | Network | No | No | Altruistic | Full Autonomy | Maximal | From peers and other agents | Learning task threshold achieved | Genetic Algorithm and Reinforcement Learning |
Clustering Analysis
The notion of similarity between papers refers to similarity between their attributes (see the dataset above). Since the attributes identified by our classification are categorical, we adopt the Gower distance measure.
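daisy() expects a data frame whose categorical columns are factors. Here x is assumed to contain the attribute columns of the dataset, without the identifier columns; a minimal sketch (column names as in the table above):
# Drop the non-attribute columns and coerce all remaining attributes to factors
x <- dataset %>%
  select(-ID, -Title) %>%
  mutate(across(everything(), as.factor))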
gower.dist <- daisy(x, metric = c("gower"))
HAC starts by treating each observation (i.e., paper) as a separate cluster. It then repeatedly identifies and merges the two most similar clusters; with complete linkage, the distance between two clusters is the maximum pairwise Gower distance between their members.
aggl.clust.c <- hclust(gower.dist, method = "complete")
plot(aggl.clust.c,cex = 0.7)
Two methods allow us to establish evaluation criteria for the number of clusters K: silhouette analysis and the bootstrap method.
Silhouette Analysis:
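The plot below relies on a helper cstats.table(), which is not defined on this page. A minimal sketch of what it presumably computes, based on fpc::cluster.stats() (only cluster.number and avg.silwidth are used by the plot):
# For k = 2..max.k, cut the dendrogram and compute cluster statistics on the Gower distances
cstats.table <- function(dist, tree, max.k) {
  sapply(2:max.k, function(k) {
    cs <- cluster.stats(d = dist, clustering = cutree(tree, k = k))
    c(cluster.number = k, avg.silwidth = cs$avg.silwidth)
  })
}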
ggplot(data = data.frame(t(cstats.table(gower.dist, aggl.clust.c, 10))),
aes(x=cluster.number, y=avg.silwidth)) +
geom_point(size=1.5)+
geom_line(size=0.5)+
scale_x_continuous(breaks = scales::pretty_breaks(n = 10), limits=c(2, 10)) +
scale_y_continuous(breaks = scales::pretty_breaks(n = 10)) +
ggtitle("") +
labs(x = "K", y = "Average silhouette width") +
theme_minimal(base_size = 15) +
geom_hline(yintercept=0.18, linetype="dashed",
color = "red", size=1)
Bootstrap of Clusters and Visualization:
For each choice of K, clusterboot reports the cluster-wise mean Jaccard stability over the bootstrap resamples (bootmean; values above roughly 0.75 indicate stable clusters, values below 0.5 unstable ones) and the number of times each cluster was dissolved (bootbrd).
- 2 CLUSTERS:
kchoice<-2
invisible(capture.output(cboot.hclust <- clusterboot(gower.dist, distances = TRUE, clustermethod = hclustCBI, k = kchoice, method = "complete", seed = 123456789)))
cboot.hclust$bootmean
## [1] 0.8423149 0.7770231
cboot.hclust$bootbrd
## [1] 0 2
dendro <- as.dendrogram(aggl.clust.c)
dendro.col <- dendro %>%
set("branches_k_color", k = 2, value = c("#2E9FDF","red")) %>%
set("branches_lwd", 1) %>%
set("labels_colors", k = 2, value = c("#2E9FDF","red")) %>%
set("labels_cex", 1)
circlize_dendrogram(dendro.col)
- 3 CLUSTERS:
kchoice<-3
invisible(capture.output(cboot.hclust <- clusterboot(gower.dist, distances = TRUE, clustermethod = hclustCBI, k = kchoice, method = "complete", seed = 123456789)))
cboot.hclust$bootmean
## [1] 0.8269758 0.8038819 0.4952221
cboot.hclust$bootbrd
## [1] 1 2 57
dendro <- as.dendrogram(aggl.clust.c)
dendro.col <- dendro %>%
set("branches_k_color", k = 3, value = c("#2E9FDF", "red","#E7B800")) %>%
set("branches_lwd", 1) %>%
set("labels_colors", k = 3, value = c("#2E9FDF", "red","#E7B800")) %>%
set("labels_cex", 1)
circlize_dendrogram(dendro.col)
- 4 CLUSTERS:
kchoice<-4
invisible(capture.output(cboot.hclust <- clusterboot(gower.dist, distances = TRUE, clustermethod = hclustCBI, k = kchoice, method = "complete", seed = 123456789)))
cboot.hclust$bootmean
## [1] 0.8441289 0.8300746 0.5662892 0.4727753
cboot.hclust$bootbrd
## [1] 1 4 49 64
dendro <- as.dendrogram(aggl.clust.c)
dendro.col <- dendro %>%
set("branches_k_color", k = 4, value = c("#2E9FDF","red","#E7B800","darkgreen")) %>%
set("branches_lwd", 1) %>%
set("labels_colors", k = 4, value = c("#2E9FDF","red","#E7B800","darkgreen")) %>%
set("labels_cex", 1)
circlize_dendrogram(dendro.col)
- 9 CLUSTERS:
kchoice<-9
invisible(capture.output(cboot.hclust <- clusterboot(gower.dist, distances = TRUE, clustermethod = hclustCBI, k = kchoice, method = "complete", seed = 123456789)))
cboot.hclust$bootmean
## [1] 0.6946536 0.5451034 0.5109907 0.6680000 0.7800000 0.6060833 0.4335839
## [8] 0.4879693 0.3661667
cboot.hclust$bootbrd
## [1] 21 53 64 35 28 46 72 67 86
dendro <- as.dendrogram(aggl.clust.c)
dendro.col <- dendro %>%
set("branches_lwd", 1) %>%
set("labels_colors", k = 9, value=c("#2E9FDF","red","#E7B800","darkgreen", "blue","darkorange","black","gray","purple")) %>%
set("labels_cex", 1)
circlize_dendrogram(dendro.col)
We visualize how the attributes are represented by the clustering:
colors_to_use <- as.numeric(x$`Autonomy`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$`Autonomy`)
circlize_dendrogram(dendro)
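Unlike the other attributes, no legend is drawn for Autonomy; one following the same pattern (levels in alphabetical order, matching as.numeric() on the factor) would be:
par(mar=c(0,25,0,0),xpd=NA)
legend(-1.7,1,
legend = c("Full Autonomy","Restricted Autonomy"),
col = c(1,2),
pch = c(20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)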
colors_to_use <- as.numeric(x$`Emergent Behaviour`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$`Emergent Behaviour`)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=TRUE)
legend(-1.7,1,
legend = c("No Emergent Behaviour","Emergent Behaviour" ),
col = c(1,2),
pch = c(20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
colors_to_use <- as.numeric(x$`Cooperative (agent level)`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$`Cooperative (agent level)`)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=TRUE)
legend(-1.7,1,
legend = c("Non cooperative agent","Cooperative agent"),
col = c(1,2),
pch = c(20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
colors_to_use <- as.numeric(x$Behaviour)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$Behaviour)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=NA)
legend(-1.7,1,
legend = c("Altruistic" , "Altruistic (collaborative)", "Altruistic locally/Selfish globally","Both versions explored","Selfish","Selfish but collaborative","Tunable"),
col = c(1,2,3,4,5,6,7),
pch = c(20,20,20,20,20,20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
colors_to_use <- as.numeric(x$`Knowledge Access`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$`Knowledge Access`)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=NA)
legend(-1.7,1,
legend = c("Limited","Maximal","Minimal", "Neighborhood", "Tunable"),
col = c(1,2,3,4,5),
pch = c(20,20,20,20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
colors_to_use <- as.numeric(x$`Trigger - first`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$`Trigger - first`)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=NA)
legend(-1.7,1,
legend = c("Domain knowledge/humans","From peers and other agents","No initial knowledge (random)","Not mentrioned"),
col = c(1,2,3,4,5,6),
pch = c(20,20,20,20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
colors_to_use <- as.numeric(x$`Trigger - update`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$`Trigger - update`)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=NA)
legend(-1.7,1,
legend = c("Action","Learning task\nthreshold achieved","Not mentioned","Periodic","Social interaction","Task/Episode"),
col = c(1,2,3,4,5,6),
pch = c(20,20,20,20,20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
colors_to_use <- as.numeric(x$`Technique`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
# Technique has more levels than the base palette, so label colors are assigned manually per paper
labels_colors(dendro) <- c(8,6,"#E7B800","#E7B800","#E7B800",8,8,8,8,2,8,8,8,8,8,8,8,"darkgreen",7,"darkgreen","#E7B800",8,3,8,8,8,7,8,4,8,8,8,8,8,"purple","#E7B800",8,8,3,3,3,8,8,8,8,8,7,8,8,8,8,5,1,"purple",8,8,3,"#E7B800",7)
dendro.list<-as.character(x$`Technique`)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=NA)
legend(-1.7,1,
legend =c("Applied Logic","Evolutionary Process","Game Theory","Game Theory and RL","Genetic Algorithm and RL","Gradient Descent","Probabilistic","Reinforcement Learning","Statistics","Supervised Learning","Swarm System"),
col = c(1,2,3,4,5,6,7,8,"purple","#E7B800","darkgreen"),
pch = c(20,20,20,20,20,20,20,20,20,20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
colors_to_use <- as.numeric(x$`Application Domain`)
colors_to_use <- colors_to_use[order.dendrogram(dendro)]
labels_colors(dendro) <- colors_to_use
dendro.list<-as.character(x$`Application Domain`)
circlize_dendrogram(dendro)
par(mar=c(0,25,0,0),xpd=NA)
legend(-1.7,1,
legend = c("Cooperative Game","CPS","Distributed Task\nAllocation Problem","Market","Network","Other","Traffic"),
col = c(1,2,3,4,5,6,7),
pch = c(20,20,20,20,20,20,20), bty = "n", pt.cex = 1, cex = 0.7,
text.col = "black", horiz = FALSE)
Multiple Correspondence Analysis
MCA identifies new latent dimensions, which are a combination of the original dimensions and hence can explain information not directly observable. We perform MCA to capture interaction between attributes with the aim of validating and further extending the cluster analysis.
For each identified dimension, MCA derives: (i) the relative eigenvalue and (ii) the proportion of variance retained (i.e., the amount of variation accounted for by the corresponding principal dimension).
We report the results obtained with the Benzécri correction combined with the Greenacre adjustment (the Benzécri correction alone is known to be optimistic).
# Benzécri + Greenacre adjustment
mca.res.bg <- epMCA(x, graphs = FALSE, correction = c("b","g"))
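The objects used below (the eigenvalue table, the contribution plots, and the biplots) are computed on res.mca, which is not defined above; presumably it is the standard FactoMineR fit on the same data, e.g.:
# Uncorrected MCA fit consumed by get_eigenvalue(), fviz_contrib() and fviz_mca_ind()
res.mca <- MCA(x, ncp = 15, graphs = FALSE)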
eig.val <- get_eigenvalue(res.mca)
kable(eig.val) %>%
kable_styling(full_width = F, font_size = 10, bootstrap_options = c("striped", "hover", "condensed"))
Dimension | eigenvalue | variance.percent | cumulative.variance.percent |
---|---|---|---|
Dim.1 | 0.1016633 | 32.0966572 | 32.09666 |
Dim.2 | 0.0477599 | 15.0785324 | 47.17519 |
Dim.3 | 0.0435705 | 13.7558631 | 60.93105 |
Dim.4 | 0.0302412 | 9.5476127 | 70.47867 |
Dim.5 | 0.0237786 | 7.5072644 | 77.98593 |
Dim.6 | 0.0213798 | 6.7499243 | 84.73585 |
Dim.7 | 0.0131710 | 4.1583027 | 88.89416 |
Dim.8 | 0.0098605 | 3.1131107 | 92.00727 |
Dim.9 | 0.0082914 | 2.6177197 | 94.62499 |
Dim.10 | 0.0073535 | 2.3216133 | 96.94660 |
Dim.11 | 0.0043133 | 1.3617619 | 98.30836 |
Dim.12 | 0.0036845 | 1.1632455 | 99.47161 |
Dim.13 | 0.0009755 | 0.3079905 | 99.77960 |
Dim.14 | 0.0006243 | 0.1970954 | 99.97669 |
Dim.15 | 0.0000738 | 0.0233061 | 100.00000 |
To interpret the results of MCA, it is necessary to choose the number of dimensions to retain. Following the average rule introduced by Lorenzo-Seva et al., we keep all the dimensions whose explained variance is greater than 9%. Hence, we retain 4 dimensions.
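The rule can be checked directly on the eigenvalue table:
# Number of dimensions whose explained variance exceeds 9%
sum(eig.val[, "variance.percent"] > 9)
## [1] 4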
Contributions of the attributes to PC1 (principal component one):
fviz_contrib(res.mca, choice="var", axes = 1, top =15,
fill = "lightgray", color = "black") +
theme_minimal() +
labs(title = "", x = "") +
theme(text = element_text(size=10), axis.text.x = element_text(angle=70, vjust=1, hjust=1))
Contributions of the attributes to PC2:
fviz_contrib(res.mca, choice="var", axes = 2, top =15,
fill = "lightgray", color = "black") +
theme_minimal() +
labs(title = "", x = "") +
theme(text = element_text(size=10), axis.text.x = element_text(angle=77, vjust=1, hjust=1))
Contributions of the attributes to PC3:
fviz_contrib(res.mca, choice="var", axes = 3, top =15,
fill = "lightgray", color = "black") +
theme_minimal() +
labs(title = "", x = "") +
theme(text = element_text(size=10), axis.text.x = element_text(angle=77, vjust=1, hjust=1))
Contributions of the attributes to PC4:
fviz_contrib(res.mca, choice="var", axes = 4, top =15,
fill = "lightgray", color = "black") +
theme_minimal() +
labs(title = "", x = "") +
theme(text = element_text(size=10), axis.text.x = element_text(angle=77, vjust=1, hjust=1))
We show the biplots:
fviz_mca_ind(res.mca,
legend.title = "Emergent Behaviour",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Emergent Behaviour`,
palette = c("#FC4E07","#00AFBB", "#E7B800", "#FC4E07"),
addEllipses = TRUE,
ellipse.level = 0.95,
axes = c(1, 2)
) + labs(title = "", x = "Dim.1", y ="Dim.2")
fviz_mca_ind(res.mca,
legend.title = "Cooperative (agent level)",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Cooperative (agent level)`,
palette = c("#FC4E07","#00AFBB"),
addEllipses = TRUE,
ellipse.level = 0.95,
axes = c(1, 2)
) + labs(title = "", x = "Dim.1", y ="Dim.2")
fviz_mca_ind(res.mca,
legend.title = "Behaviour",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Behaviour`,
palette = c("#2fc437","#00AFBB","#E7B800","#FC4E07","#e633ff","#FF8000","#8000FF","#0080FF","#FF0080"),
addEllipses = TRUE, # Concentration ellipses
ellipse.level = 0.8,
axes = c(1,2)
) + labs(title = "", x = "Dim.1", y ="Dim.2")
fviz_mca_ind(res.mca,
legend.title = "Trigger - first",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Trigger - first`,
palette = c("#2fc437","#00AFBB","#E7B800","#FC4E07","#e633ff","#FF8000","#8000FF","#0080FF","#FF0080"),
addEllipses = TRUE, # Concentration ellipses
ellipse.level = 0.6,
axes = c(2,3)
) + labs(title = "", x = "Dim.2", y ="Dim.3")
fviz_mca_ind(res.mca,
legend.title = "Trigger - update",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Trigger - update`, # color by groups
palette = c("#2fc437","#00AFBB","#E7B800","#FC4E07","#e633ff","#FF8000","#8000FF","#0080FF","#FF0080"),
addEllipses = TRUE, # Concentration ellipses
ellipse.level = 0.6,
axes = c(2,3)
) + labs(title = "", x = "Dim.2", y ="Dim.3")
fviz_mca_ind(res.mca,
legend.title = "Knowledge Access",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Knowledge Access`,
palette = c("#2fc437","#00AFBB","#E7B800","#FC4E07","#e633ff","#FF8000","#8000FF","#0080FF","#FF0080"),
addEllipses = TRUE, # Concentration ellipses
ellipse.level = 0.6,
axes = c(2,3)
) + labs(title = "", x = "Dim.2", y ="Dim.3")
fviz_mca_ind(res.mca,
legend.title = "Technique",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Technique`,
addEllipses = TRUE,
ellipse.level = 0.5,
axes = c(3,4)
) + labs(title = "", x = "Dim.3", y ="Dim.4")
fviz_mca_ind(res.mca,
legend.title = "Application Domain",
pointsize = 1.2,
labelsize = 3,
habillage = x$`Application Domain`, # color by groups
addEllipses = TRUE, # Concentration ellipses
ellipse.level = 0.6,
axes = c(3,4)
) + labs(title = "", x = "Dim.3", y ="Dim.4")