dc.description.abstract | For a long time, the preferred machine learning algorithms for graph classification have been kernel-based. The reasoning has been that kernels represent an elegant way to handle structured data that cannot easily be represented as numerical vectors or matrices. An important reason for the success of kernel methods is the 'kernel trick', which essentially replaces computing the feature representation with a call to a kernel function, thus saving computation and memory cost. For some of the most successful kernels in the graph domain, however, such as graphlet kernels, this is not feasible, and one must compute the entire feature distribution in order to obtain the kernel. We present experimental evidence that feeding graphlet features to different neural networks yields accuracy comparable to kernelized SVMs. As neural networks are parametric models that scale well with data size and can yield faster predictions than SVMs, our results suggest that they are attractive models for graph classification. Our experiments show that increasing the depth of the network gives a highly significant speedup in convergence, but has no effect on accuracy on our datasets. In addition, we present an experimental method that uses latent node representations from a method called DeepWalk as input to a neural network for graph classification. This method underperforms both kernel-based methods and our graphlet-based method. Finally, we discuss several ways to extend both graphlet-based and embedding-based classification methods. | eng |