📄️ Relation Guided Message Passing for Multi-label Classification
As the name implies, multi-label classification entails selecting the correct subset of tags for each instance. Its main difference from multi-class learning is that in multi-label learning the class values are not mutually exclusive, and usually no prior dependencies between the labels are provided explicitly. The existing literature generally treats relationships between labels as undirected co-occurrences. However, there are usually multiple types of dependencies between labels, and their strengths are not independent of the direction of the specified edge. For instance, the "ship" and "sea" labels have an obvious dependency, but the presence of the former implies the latter much more strongly than vice versa. In this project, we introduce relational graph neural networks to model label dependencies. We consider two types of statistical relationships: pulling and pushing.
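The asymmetry behind the "ship"/"sea" example can be made concrete with conditional co-occurrence statistics. The sketch below uses a hypothetical toy dataset (the label sets and counts are illustrative, not from the project) to show why P(sea | ship) and P(ship | sea) differ, which is exactly the directional information an undirected co-occurrence graph discards.

```python
# Toy label sets (hypothetical data): each instance is tagged with a subset of labels.
instances = [
    {"ship", "sea"}, {"ship", "sea"}, {"ship", "sea"},
    {"sea"}, {"sea"}, {"sea", "beach"},
    {"ship"},  # rare: a ship without the sea (e.g. in dry dock)
]

def conditional(a, b, data):
    """Estimate P(b | a): how strongly the presence of label a implies label b."""
    has_a = [s for s in data if a in s]
    return sum(b in s for s in has_a) / len(has_a)

p_sea_given_ship = conditional("ship", "sea", instances)  # 3/4 = 0.75
p_ship_given_sea = conditional("sea", "ship", instances)  # 3/6 = 0.5
```

The two conditionals are unequal, so a model that passes messages along directed, typed edges can represent this dependency more faithfully than a symmetric co-occurrence edge.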
📄️ Bag Graph: Multiple Instance Learning using Bayesian Graph Neural Networks
Multiple Instance Learning (MIL) is a weakly supervised learning problem where the aim is to assign labels to sets or bags of instances, as opposed to traditional supervised learning where each instance is assumed to be independent and identically distributed (i.i.d.) and is to be labeled individually.
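The classical MIL setting can be illustrated in a few lines. Under the standard MIL assumption (a bag is positive if and only if it contains at least one positive instance), bag labels are derived from instance labels as follows; the bag names and toy labels are illustrative only.

```python
def bag_label(instance_labels):
    """Standard MIL assumption: a bag is positive iff at least one instance is positive."""
    return int(any(instance_labels))

bags = {
    "bag_a": [0, 0, 1],  # one positive instance -> positive bag
    "bag_b": [0, 0, 0],  # all instances negative -> negative bag
}
```

The learning problem is weakly supervised because only the bag-level labels are observed at training time, not the instance labels inside each bag.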
📄️ Random Graphs for Bayesian Graph Neural Networks
Real-world data is noisy. Since graphs are constructed from such data, the observed graphs can contain errors, such as spurious or missing edges.
📄️ GEEM: Active Learning for Graphs
Some nodes are more informative than others.
📄️ RNN with Particle Flow for Probabilistic Spatio-Temporal Forecasting
Spatio-temporal forecasting has numerous applications in analyzing wireless, traffic, and financial networks.
📄️ Microwave Breast Cancer Detection
Early detection of breast cancer significantly increases the chance of recovery.
📄️ HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation
Efficiently determining the satisfiability of a Boolean formula -- known as the SAT problem for brevity -- is crucial in various industrial problems. Recently, the advent of deep learning methods has introduced significant potential for enhancing SAT solving. However, a major barrier to the advancement of this field has been the scarcity of large, realistic datasets. The majority of current public datasets are either randomly generated or extremely limited, containing only a few examples from unrelated problem families. These datasets are inadequate for meaningful training of deep learning methods. In light of this, researchers have started exploring generative techniques to create data that more accurately reflects SAT problems encountered in practice. These methods have so far suffered from either an inability to produce challenging SAT problems or from time-scalability obstacles. In this paper, we address both issues by identifying and manipulating the key contributors to a problem's "hardness", known as cores. Although some previous work has addressed cores, the time costs are unacceptably high due to the expense of traditional heuristic core-detection techniques. We introduce a fast core-detection procedure that uses a graph neural network. Our empirical results demonstrate that we can efficiently generate problems that remain hard to solve and retain key attributes of the original example problems. We show experimentally that the generated synthetic SAT problems can be used in a data augmentation setting to improve the prediction of solver runtimes.
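Feeding a SAT formula to a graph neural network requires first encoding it as a graph. The abstract does not specify the exact encoding used here, but a common choice is the bipartite clause-variable incidence graph; the sketch below builds one from a small hypothetical CNF formula in DIMACS-style notation (positive integers are variables, negative integers their negations).

```python
# A CNF formula as a list of clauses, each a list of signed ints (DIMACS style):
# the formula (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3).
formula = [[1, -2], [2, 3], [-1, -3]]

def clause_variable_graph(clauses):
    """Build the bipartite clause-variable incidence graph: one node per clause,
    one node per variable, and an edge for each literal occurrence, labelled
    +1 or -1 by the literal's polarity."""
    edges = []
    for ci, clause in enumerate(clauses):
        for lit in clause:
            edges.append((f"c{ci}", f"v{abs(lit)}", 1 if lit > 0 else -1))
    return edges
```

A GNN operating on such a graph can then score nodes or clauses, for example to predict which clauses are likely to belong to an unsatisfiable core, replacing expensive heuristic core extraction.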