My research centers on methodological aspects of Bayesian statistics and their application to large-scale complex data. I am particularly focused on developing methodology across a broad range of areas, including semiparametric density regression, shrinkage priors for anisotropic function estimation, variable selection with non-Gaussian errors, massive covariance matrix estimation, surface reconstruction and imaging, and modeling shapes of non-Euclidean objects. I enjoy developing methodology that has immediate motivation and impact in a particular application area while being broadly applicable and leading to foundational questions. In the Bayesian paradigm this often involves developing new classes of flexible prior distributions for densities, conditional densities, functions, sparse vectors, matrices, or tensors. It is fascinating to explore the structure of the spaces on which these priors are supported while studying how the posterior concentrates as increasing amounts of data are collected. Studying these spaces becomes more challenging outside of unconstrained Euclidean spaces, such as for closed surfaces and other shapes, and when the dimension explodes.

While Bayesian hierarchical models offer a unified and coherent framework for structured modeling and inference, two key challenges persist. First, as one moves away from simple parametric models, understanding the properties of a posterior distribution poses a stiff challenge. Second, even if the true posterior has desirable properties, sampling from it in large-scale problems commonly faces scalability issues; this is relevant for both high-dimensional and big-data problems. My research aims to address these challenges simultaneously, developing new theory to evaluate the associated procedures and designing scalable, highly efficient algorithms for Bayesian computation.