Project data description: Using a tool called Condensa, I can prune values from a DNN model and retrain it to recover accuracy, resulting in a sparsely populated set of weights that achieves similar accuracy. The resulting sparse model retains significantly less data (fewer than 1% of the weights remain nonzero), which presents an opportunity for a sparse implementation that computes results only for the nonzero elements of the model. Unfortunately, such an implementation fails to leverage modern devices, which have been tailored to efficiently execute large, dense, highly data-parallel operations. The long-term vision of my project is to analyze the pros and cons of sparse implementations of such operations and to identify opportunities to exploit locally dense clusters of data in the model.

Viz goal and target audience: I will be visualizing the sparse weights in a single convolution layer of a DNN. The visualization will help guide the sparse implementation of the convolution operation, and will eventually accompany the performance statistics, as well as the architectural implications of the implementation on various hardware. The data will be a sparse 4-D tensor. Since I am interested in the data at the filter level, it will be represented as a 2-D matrix of 3x3 filters. I will be visualizing local sparsity (at the filter level) as well as global sparsity. Other possible areas of interest include visualizing the filters as vectors alongside density information to inform a cache prefetcher, and visualizing the density implications of altering the pruning threshold. The target audience will be the reading committee of my master's project: professors with a background in computer architecture and insight into performance-critical, concurrent software.
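As a sketch of the quantities the visualization is built on, the snippet below computes global sparsity and per-filter local density for a 4-D convolution weight tensor laid out as a 2-D grid of 3x3 filters. The tensor shape, threshold value, and magnitude-based pruning here are illustrative assumptions, not taken from Condensa's actual API or the project's real model.

```python
import numpy as np

# Hypothetical pruned conv-layer weights with shape
# (out_channels, in_channels, 3, 3); values are illustrative only.
rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 4, 3, 3))

# Stand-in for pruning: zero out small-magnitude weights.
threshold = 1.2
weights[np.abs(weights) < threshold] = 0.0

# Global sparsity: fraction of zero entries over the whole tensor.
global_sparsity = 1.0 - np.count_nonzero(weights) / weights.size

# Local density: nonzero fraction within each 3x3 filter, arranged as an
# (out_channels x in_channels) grid, matching the planned 2-D filter view.
local_density = np.count_nonzero(weights, axis=(2, 3)) / 9.0

print(f"global sparsity: {global_sparsity:.3f}")
print(f"local density grid shape: {local_density.shape}")
```

Sweeping `threshold` over a range and re-plotting `local_density` would give a first look at the density implications of altering the pruning value.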