In this dissertation, we study two network problems using matrices as our primary analysis tools. First, the limits of treating interference as noise are studied for the canonical two-user symmetric Gaussian interference channel. A two-step approach is proposed for finding approximately optimal input distributions in the high signal-to-noise ratio (SNR) regime. In the first step, approximately and precisely optimal input distributions are found for the Avestimehr-Diggavi-Tse (ADT) linear deterministic model. In the second step, these distributions are systematically translated into the Gaussian model, where we show that they achieve the sum capacity to within a finite gap. Next, the problem of clustering brain networks based on resting-state fMRI time-series data is studied. Our approach builds on the classical K-means algorithm, using the Mahalanobis distance as the distance metric. We first consider the hypothetical case where the ground truth is available, so that an optimal distance metric can be learned from it. This naturally motivates an unsupervised clustering algorithm that alternates between clustering and metric learning. The performance of the proposed algorithm is evaluated via computer simulations.
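To make the alternating structure concrete, the following is a minimal sketch of K-means under a Mahalanobis metric, alternating with a metric update. It is an illustrative toy only: the function name, the farthest-point initialization, and the metric update (setting the metric to a regularized inverse of the within-cluster scatter) are assumptions for this sketch, not the dissertation's actual algorithm.

```python
import numpy as np

def mahalanobis_kmeans(X, k, n_outer=5, n_inner=20, seed=0):
    """Toy sketch: alternate (i) K-means under the Mahalanobis distance
    d_M(x, c)^2 = (x - c)^T M (x - c) with (ii) a simple metric update.
    All update rules here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    M = np.eye(d)  # start from the Euclidean metric (M = identity)
    # Farthest-point initialization of the k centers.
    idx = [int(rng.integers(n))]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1).min(1)
        idx.append(int(d2.argmax()))
    centers = X[idx].astype(float)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_outer):
        # Clustering step: Lloyd iterations under the current metric M.
        for _ in range(n_inner):
            diff = X[:, None, :] - centers[None, :, :]       # shape (n, k, d)
            d2 = np.einsum('nkd,de,nke->nk', diff, M, diff)  # squared distances
            labels = d2.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        # Metric step (a crude surrogate for metric learning): shrink
        # within-cluster scatter by inverting the pooled covariance.
        Sw = sum(np.cov(X[labels == j].T, bias=True) * (labels == j).sum()
                 for j in range(k) if (labels == j).sum() > 1) / n
        M = np.linalg.inv(Sw + 1e-3 * np.eye(d))
    return labels, centers, M
```

In a realistic pipeline each node of the brain network would be a point in feature space derived from its fMRI time series, and the metric step would be replaced by a proper metric-learning objective.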
