A new technical paper titled “Computing high-degree polynomial gradients in memory” was published by researchers at UCSB, HP Labs, Forschungszentrum Juelich GmbH, and RWTH Aachen University.
Abstract
“Specialized function gradient computing hardware could greatly improve the performance of state-of-the-art optimization algorithms. Prior work on such hardware, performed in the context of Ising Machines and related concepts, is limited to quadratic polynomials and not scalable to commonly used higher-order functions. Here, we propose an approach for massively parallel gradient calculations of high-degree polynomials, which is conducive to efficient mixed-signal in-memory computing circuit implementations and whose area scales proportionally with the product of the number of variables and terms in the function and, most importantly, independent of its degree. Two flavors of such an approach are proposed. The first is limited to binary-variable polynomials typical in combinatorial optimization problems, while the second type is broader at the cost of a more complex periphery. To validate the former approach, we experimentally demonstrated solving a small-scale third-order Boolean satisfiability problem based on integrated metal-oxide memristor crossbar circuits, with a competitive heuristics algorithm. Simulation results for larger-scale, more practical problems show orders of magnitude improvements in area, speed and energy efficiency compared to the state-of-the-art. We discuss how our work could enable even higher-performance systems after co-designing algorithms to exploit massively parallel gradient computation.”
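To make the abstract's scaling claim concrete, the sketch below computes all partial derivatives of a multilinear binary-variable polynomial from a flat term list. This is a hypothetical software illustration, not the paper's mixed-signal circuit: the `(coeffs, mask)` representation and the function name are assumptions, chosen so that storage is proportional to (number of terms) × (number of variables) and does not grow with the polynomial's degree, mirroring the area-scaling property described above.

```python
# Illustrative sketch only (assumed representation, not the paper's hardware):
# a degree-d multilinear polynomial over binary variables is stored as a
# coefficient vector plus a terms-by-variables membership matrix, so memory
# scales with (#terms * #variables), independent of d.
import numpy as np

def poly_gradient(coeffs, mask, x):
    """Partial derivatives of f(x) = sum_t coeffs[t] * prod_{i in term t} x[i].

    coeffs: (T,) term coefficients
    mask:   (T, N) 0/1 matrix; mask[t, i] = 1 if variable i appears in term t
    x:      (N,) binary assignment
    Returns the (N,) vector of df/dx_i.
    """
    T, N = mask.shape
    grads = np.zeros(N)
    for t in range(T):
        members = np.flatnonzero(mask[t])
        vals = x[members]
        for k, i in enumerate(members):
            # Cofactor of x_i in term t: product of the other member variables.
            grads[i] += coeffs[t] * np.prod(np.delete(vals, k))
    return grads

# Example: f(x) = 2*x0*x1*x2 - x1, evaluated at x = (1, 1, 1).
coeffs = np.array([2.0, -1.0])
mask = np.array([[1, 1, 1],
                 [0, 1, 0]])
g = poly_gradient(coeffs, mask, np.array([1, 1, 1]))
print(g)  # [2. 1. 2.]
```

In the in-memory-computing setting the paper targets, the per-term cofactor products are what the analog crossbar evaluates in parallel; here they are simply looped over for clarity.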
Find the technical paper here. Published September 2024.
Bhattacharya, T., Hutchinson, G.H., Pedretti, G. et al. Computing high-degree polynomial gradients in memory. Nat Commun 15, 8211 (2024). https://doi.org/10.1038/s41467-024-52488-y. Creative Commons license.