Deep learning is driving rapidly escalating demand for AI computing capability. Significant improvements in compute performance and efficiency are required to satisfy this demand, now and in the future. Analog-domain computation is a promising technique for improving deep neural network (DNN) compute efficiency, enabled by algorithmic innovations in numerical precision reduction and by the inherent robustness of DNNs. In particular, performing matrix multiplications in the analog domain can significantly reduce the major DNN compute bottleneck, for both inference and training. I will describe innovations in materials, heterogeneous integration, and analog accelerator design underway at IBM Research that create a path to realizing analog computing devices for deep-learning acceleration.
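To make the core idea concrete, the following is a minimal, purely illustrative sketch (not IBM's actual design) of why reduced precision plus noise robustness enables analog matrix multiplication: weights are quantized to a small number of levels (modeling discrete conductance states in an analog array) and the output is perturbed by additive noise (modeling device and readout non-idealities), yet the result stays close to the exact product. The function name `analog_matvec` and the parameters `bits` and `noise_std` are hypothetical choices for this sketch.

```python
import numpy as np

def analog_matvec(W, x, bits=4, noise_std=0.01, rng=None):
    """Crude model of an analog matrix-vector multiply:
    weights quantized to 2**bits - 1 conductance levels,
    output perturbed by Gaussian read noise."""
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** bits - 1
    w_max = np.abs(W).max()
    # Quantize weights onto the available analog levels.
    W_q = np.round(W / w_max * levels) / levels * w_max
    y = W_q @ x
    # Additive noise stands in for device/readout non-idealities.
    return y + rng.normal(0.0, noise_std * np.abs(y).max(), size=y.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
x = rng.standard_normal(128)
y_ideal = W @ x
y_analog = analog_matvec(W, x, rng=rng)
rel_err = np.linalg.norm(y_analog - y_ideal) / np.linalg.norm(y_ideal)
```

For typical random inputs, the relative error of the noisy quantized product is small, which is the property that lets robust DNN workloads tolerate analog computation.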