In recent years, it has become popular to analyse relational data by organizing data samples as vertices of graphs, with edges capturing their relations. A widespread tool for analysing graphs, especially when they describe large-scale data, is graph signal processing. This theory generalizes discrete signal processing by introducing graph filters, which shift information between neighboring vertices, an operation equivalent to spectral filtering in the graph's eigenvalue space. Unfortunately, the graph filters provided by graph management and analysis systems typically cannot process data that exceed the random access memory (RAM) capacity of the computing infrastructure. Thus, their usage is restricted by the number of edges they can simultaneously process. This thesis explores promising techniques that perform graph signal processing when the underlying infrastructure can hold in memory only the vertices of graphs but not their edges (vertices are typically hundreds of times fewer in number). In particular, it proposes that edges can be kept in permanent storage, such as hard drives, and explores two ways of organizing them for efficient traversal during graph filter computation: the first stores all edges of a graph in a single file, whereas the second stores the edges of each vertex in a separate file. These approaches are implemented in a large graph management and filtering system written in the Java programming language, which runs on local machines with potential memory limitations. Experimental comparison with equivalent implementations that maintain all edges in random access memory shows that the proposed techniques facilitate graph signal processing even when graphs exceed memory capacity. Finally, an amortized and wall-clock analysis of the run times of graph management methods recommends different techniques for different prospective usages of the developed system.
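To illustrate the second edge layout described above, the following minimal Java sketch stores each vertex's outgoing edges in its own binary file and streams them from disk during one graph shift step, so that only the per-vertex signal values need to reside in memory. All class and method names here are hypothetical illustrations and are not taken from the developed system.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of a "one file per vertex" edge store: edges live on disk,
// only signal vectors (one double per vertex) are kept in memory.
public class VertexEdgeStore {

    private final Path directory;

    public VertexEdgeStore(Path directory) throws IOException {
        this.directory = Files.createDirectories(directory);
    }

    // Appends one edge (source -> destination) to the source vertex's file.
    public void addEdge(int source, int destination) throws IOException {
        Path file = directory.resolve("vertex-" + source + ".edges");
        try (DataOutputStream out = new DataOutputStream(
                new FileOutputStream(file.toFile(), true))) {
            out.writeInt(destination);
        }
    }

    // One propagation step of a graph shift: each vertex sends its signal
    // value to its neighbors, reading its edges from permanent storage.
    public double[] shift(double[] signal) throws IOException {
        double[] result = new double[signal.length];
        for (int source = 0; source < signal.length; source++) {
            Path file = directory.resolve("vertex-" + source + ".edges");
            if (!Files.exists(file)) {
                continue;
            }
            try (DataInputStream in = new DataInputStream(
                    new FileInputStream(file.toFile()))) {
                while (in.available() > 0) {
                    int destination = in.readInt();
                    result[destination] += signal[source];
                }
            }
        }
        return result;
    }
}
```

A graph filter can then be evaluated by repeating such shift steps and accumulating weighted results, which is why the traversal cost of the on-disk edge layout dominates the filter's running time in this setting.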