AI computational workloads require access to and processing of large amounts of data for both training and inference applications, making memory management increasingly important. Today's state-of-the-art AI data center accelerator hardware configuration consists of a Graphics Processing Unit (GPU) chip interconnected with one or more High Bandwidth Memory (HBM) chips through a silicon interposer attached to an organic laminate substrate. As AI chips evolve and new core architectures emerge, will this hardware configuration be sufficient to deliver the performance improvements enabled by these new chip designs? What new technologies will be required to enable future system performance gains? IBM has embarked on a bold, diverse research agenda to address these questions. A key element of this agenda focuses on heterogeneous integration (HI) technology: what role HI will play, and which HI architecture options are most likely to address the bandwidth challenges associated with AI workloads.