As Moore's law slows down, domain-specific accelerators are the only way forward to reduce the energy consumption of key operations while staying within the power budget. Here, AWS cloud FPGAs were used to accelerate applications in deep learning and optimization by mapping algorithms to hardware designs with high energy efficiency.
We demonstrated, both in experiment and in simulation, how a probabilistic circuit can learn different functionalities. We observed that non-ideal characteristics of the stochastic magnetic tunnel junctions can be countered by hardware-aware in-situ learning. This approach should make it possible to scale p-computers up to GB scales by adapting the connectivity to the characteristics of the custom hardware.
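To give a flavor of the idea, the sketch below trains a small, fully visible Boltzmann machine with software p-bits using the standard update m_i = sgn(tanh(I_i) - r). The target function (an AND gate), learning rate, and all other parameters are assumptions chosen for demonstration only; this is not the circuit or learning setup used in the work.

```python
import numpy as np

# Minimal sketch of in-situ Boltzmann-machine learning with p-bits.
# Illustrative only: target function, learning rate, and sample counts
# are assumptions, not the experimental setup.

rng = np.random.default_rng(0)
N = 3                        # three p-bits: two inputs and one output (AND gate)
beta = 1.0                   # inverse "temperature" of the p-bit response

# Truth table of AND in bipolar (+1/-1) encoding: [in1, in2, out]
data = np.array([[-1, -1, -1],
                 [-1, +1, -1],
                 [+1, -1, -1],
                 [+1, +1, +1]], dtype=float)

def sample(J, h, steps=1000):
    """Gibbs-style sampling with the p-bit update m_i = sgn(tanh(beta*I_i) - r)."""
    m = rng.choice([-1.0, 1.0], size=N)
    corr = np.zeros((N, N))
    mean = np.zeros(N)
    for _ in range(steps):
        for i in rng.permutation(N):
            I = h[i] + J[i] @ m
            m[i] = np.sign(np.tanh(beta * I) - rng.uniform(-1, 1))
        corr += np.outer(m, m)
        mean += m
    return corr / steps, mean / steps

# Statistics of the data that the learned weights should reproduce
corr_data = (data.T @ data) / len(data)
mean_data = data.mean(axis=0)

J = np.zeros((N, N))
h = np.zeros(N)
lr = 0.05
for epoch in range(100):
    corr_model, mean_model = sample(J, h)
    # Boltzmann learning rule: move model statistics toward data statistics
    dJ = lr * (corr_data - corr_model)
    np.fill_diagonal(dJ, 0.0)          # no self-coupling
    J += (dJ + dJ.T) / 2               # keep couplings symmetric
    h += lr * (mean_data - mean_model)

print("learned couplings:\n", J)
print("learned biases:", h)
```

Because the learning rule only compares measured correlations with data correlations, the same loop can in principle run on imperfect hardware, which is the intuition behind hardware-aware in-situ learning.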
Novel beyond-Moore approaches are needed to keep increasing computing performance. Probabilistic computing based on spintronic hardware makes it possible to efficiently process the uncertainty inherent in data and to leverage randomness to interpret, infer, and make better decisions faster.
We discovered that specially designed nanomagnets can change states extremely quickly, in less than a billionth of a second, even at room temperature. By analyzing their behavior, we showed how these rapid fluctuations can be exploited in probabilistic computing applications and in hardware accelerators for Monte Carlo simulations, which could lead to significant advancements. Code for the numerical simulations can be found here. The prediction was later verified experimentally by two independent groups at IBM and Tohoku University.
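As a toy picture of why fast fluctuations matter, the sketch below models a low-barrier nanomagnet as a two-state random telegraph process with exponentially distributed dwell times. The nanosecond dwell time and the two-state approximation are illustrative assumptions, not the full magnetization dynamics analyzed in the paper.

```python
import numpy as np

# Minimal sketch: random-telegraph fluctuations of a low-barrier nanomagnet,
# modeled as a two-state Poisson process (an assumption for illustration).

rng = np.random.default_rng(1)
tau = 1e-9          # assumed mean dwell time per state (~1 ns)
T_total = 1e-5      # total simulated time in seconds
t, state = 0.0, 1
times, states = [0.0], [state]

while t < T_total:
    dwell = rng.exponential(tau)   # exponentially distributed dwell time
    t += dwell
    state = -state                 # flip between the two magnetic states
    times.append(t)
    states.append(state)

# The average switching rate sets how fast independent random bits are produced;
# it should come out close to 1/tau.
rate = (len(times) - 1) / times[-1]
print(f"observed switching rate: {rate:.3e} Hz (expected ~{1/tau:.3e} Hz)")
```

The faster the fluctuations, the higher the rate of independent random samples a p-bit or Monte Carlo accelerator can draw per second.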
Memristive devices play a vital role in fields such as in-memory computing and neuromorphic systems. We used Kinetic Monte Carlo simulations to understand how these devices form tiny conductive filaments at the nanoscale. This approach allowed us to explain experimental observations on HfO2 memristors, providing valuable insights into their behavior and their potential for future technology.
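To give a flavor of the method, the sketch below runs a minimal Kinetic Monte Carlo (Gillespie-type) loop for field-driven vacancy hopping on a short 1D lattice. The barrier, attempt frequency, field bias, and geometry are illustrative assumptions, not the device model used in the study.

```python
import numpy as np

# Minimal Kinetic Monte Carlo sketch: oxygen-vacancy hopping on a 1D lattice.
# All physical parameters below are assumptions for illustration.

rng = np.random.default_rng(2)
kT = 0.025            # thermal energy in eV (~room temperature)
nu0 = 1e13            # attempt frequency in Hz
Ea = 0.6              # assumed hopping barrier in eV
E_field = 0.05        # assumed field-induced barrier lowering per hop toward the electrode, in eV
L = 20                # number of lattice sites
occ = np.zeros(L, dtype=bool)
occ[:3] = True        # a few vacancies start near one electrode

t = 0.0
for step in range(10000):
    # Enumerate possible events: an occupied site hops into an empty neighbor
    events = []
    for i in np.flatnonzero(occ):
        for j in (i - 1, i + 1):
            if 0 <= j < L and not occ[j]:
                rate = nu0 * np.exp(-(Ea - E_field * (j - i)) / kT)
                events.append((i, j, rate))
    if not events:
        break
    rates = np.array([r for _, _, r in events])
    total = rates.sum()
    # Advance time by an exponential waiting time, then pick one event by its rate
    t += rng.exponential(1.0 / total)
    i, j, _ = events[rng.choice(len(events), p=rates / total)]
    occ[i], occ[j] = False, True

print(f"simulated time: {t:.3e} s, vacancy positions: {np.flatnonzero(occ)}")
```

The same select-event/advance-time loop, with realistic rates and 3D geometry, is what lets kinetic Monte Carlo track filament growth over experimentally relevant timescales.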
Heat management is a critical challenge in the development of smaller and more powerful computing chips. In our study, we explored how heat moves at the nanoscale, which is crucial for improving the performance and reliability of electronic devices. Using the McKelvey-Shockley flux equations, we found that traditional heat-flow models can be adapted to work across heat-transfer conditions ranging from ballistic to diffusive. Our findings help bridge the gap between theory and practical applications, providing valuable insights for designing future generations of computing technology.
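As a simplified illustration, the sketch below solves the steady-state McKelvey-Shockley (two-flux) equations for phonons crossing a thin film and compares the resulting net flux with the analytic transmission lam/(lam + L). The gray, single-mean-free-path model and the fixed injected fluxes are assumptions for demonstration, not the full treatment in the paper.

```python
import numpy as np

# Minimal sketch of the steady-state McKelvey-Shockley flux equations:
#   dF+/dx = (F- - F+)/lam
#   dF-/dx = (F- - F+)/lam
# with F+(0) and F-(L_film) fixed by the hot and cold contacts.
# Parameters are illustrative assumptions.

def net_flux(L_film, lam, phi_hot=1.0, phi_cold=0.0, n=200):
    """Solve the discretized two-flux equations and return the net flux F+ - F-."""
    dx = L_film / n
    m = 2 * (n + 1)                      # unknowns: F+_0..F+_n, then F-_0..F-_n
    A = np.zeros((m, m))
    b = np.zeros(m)
    row = 0
    for i in range(n):                   # box scheme on each interval
        for off in (0, n + 1):           # same right-hand side for F+ and F-
            A[row, off + i + 1] += 1.0 / dx
            A[row, off + i] -= 1.0 / dx
            # (F+ - F-)/lam evaluated at the interval midpoint
            A[row, i] += 0.5 / lam
            A[row, i + 1] += 0.5 / lam
            A[row, n + 1 + i] -= 0.5 / lam
            A[row, n + 1 + i + 1] -= 0.5 / lam
            row += 1
    A[row, 0] = 1.0; b[row] = phi_hot; row += 1     # F+(0) injected by hot contact
    A[row, 2 * n + 1] = 1.0; b[row] = phi_cold      # F-(L) injected by cold contact
    sol = np.linalg.solve(A, b)
    return sol[0] - sol[n + 1]           # net flux is constant; read it at x = 0

lam = 100e-9                              # assumed phonon mean free path, 100 nm
for L_film in (10e-9, 100e-9, 1e-6):
    F = net_flux(L_film, lam)
    print(f"L = {L_film*1e9:7.1f} nm  flux = {F:.3f}  analytic lam/(lam+L) = {lam/(lam + L_film):.3f}")
```

The printed flux drops from a thickness-independent (ballistic) value for thin films toward the Fourier-law (diffusive) behavior for thick ones, which is the ballistic-to-diffusive crossover the flux method captures.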