The brain is the ultimate meta-material. Unlike typical computers, which process information sequentially through a central processor, the brain performs its computations with roughly 100 billion neurons working in parallel, giving it remarkable advantages in speed, efficiency, and robustness. Neuromorphic computing is a field focused on developing brain-like computational hardware to improve speed and efficiency, often specifically for machine learning applications. Typically, these efforts retain traditional top-down algorithms like gradient descent but parallelize the computation, move it into memory, or explicitly imitate the brain's architecture. Here I will discuss an alternative approach: an analog electronic network built from copies of a single self-adjusting element, which can be trained to perform a variety of nontrivial tasks, including nonlinear regression and classification. When boundary conditions corresponding to training data are applied to the system, it responds by evolving its internal state (the conductances of the edges of the network), stopping only when the applied input voltages naturally produce the desired outputs, that is, a zero-error solution. Because this evolution is determined locally by each element, the system requires no external memory or processing – it trains itself. Learning materials have the potential to be extremely compact, energy-efficient, and fast computational devices, and to reduce electronic waste. Furthermore, they are fascinating active-matter systems in their own right, and they can teach us about ensemble systems that learn without the added complexity of biology.
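To make the idea of a self-training resistor network concrete, here is a minimal numerical sketch of a contrastive, local learning rule of the kind described above (in the literature this style of rule is often called "coupled learning"). This is an illustrative toy simulation, not the speaker's hardware implementation: the network topology, node roles, and the parameters `eta` (nudge amplitude) and `alpha` (learning rate) are all assumptions chosen for the example. Each edge updates its own conductance using only the voltage drops it sees in a "free" state (inputs applied) and a "clamped" state (output nudged toward the target), so no global processor or external memory is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy resistor network: 5 nodes, 7 edges with learnable conductances G.
# Node roles (arbitrary choices for this sketch):
#   node 0 = input terminal, node 4 = ground (0 V), node 2 = output terminal.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
G = rng.uniform(0.5, 1.5, len(edges))  # conductances: the learning degrees of freedom

N = 5

def solve(fixed):
    """Solve Kirchhoff's laws given boundary voltages {node: volts}."""
    L = np.zeros((N, N))  # graph Laplacian weighted by conductance
    for (i, j), g in zip(edges, G):
        L[i, i] += g; L[j, j] += g; L[i, j] -= g; L[j, i] -= g
    fixed_nodes = list(fixed)
    free_nodes = [n for n in range(N) if n not in fixed]
    b = np.array([fixed[n] for n in fixed_nodes])
    V = np.zeros(N)
    V[fixed_nodes] = b
    # Unconstrained node voltages minimize dissipated power:
    V[free_nodes] = np.linalg.solve(L[np.ix_(free_nodes, free_nodes)],
                                    -L[np.ix_(free_nodes, fixed_nodes)] @ b)
    return V

V_in, V_target = 1.0, 0.37   # a single training example (illustrative)
eta, alpha = 0.1, 0.05       # nudge amplitude and learning rate (assumed values)

def error():
    return (solve({0: V_in, 4: 0.0})[2] - V_target) ** 2

e0 = error()
for _ in range(200):
    Vf = solve({0: V_in, 4: 0.0})                 # free state: inputs only
    clamp = Vf[2] + eta * (V_target - Vf[2])      # nudge output toward target
    Vc = solve({0: V_in, 4: 0.0, 2: clamp})       # clamped state
    # Local rule: each edge compares its own free and clamped voltage drops.
    for k, (i, j) in enumerate(edges):
        G[k] += (alpha / eta) * ((Vf[i] - Vf[j]) ** 2 - (Vc[i] - Vc[j]) ** 2)
    G = np.clip(G, 1e-3, None)                    # conductances stay positive

e_final = error()
```

The update descends the contrast between clamped and free dissipated power, and because the output voltage is a conductance-weighted average of the input and ground voltages, any target in (0, 1) V is reachable; after training, `e_final` is far below the initial error `e0`.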
Sam Dillavou is a postdoctoral fellow at the University of Pennsylvania. His current work focuses on the implementation and understanding of learning as an emergent property, on sudden arrests of granular flows, and on the use of machine learning in experimental science. He did his graduate work at Harvard University, where he studied memory effects and ensemble behaviors in frictional interfaces.
AI/ML @ SUFs Working Group
The Applied AI/ML Seminar Series is presented with the goal of increasing communication and collaboration among scientists at the three scientific user facilities at Argonne National Laboratory: the Advanced Photon Source (APS), the Argonne Tandem Linac Accelerator System (ATLAS), and the Center for Nanoscale Materials (CNM). It also serves as a resource for both new and experienced AI/ML practitioners at the lab. We plan to host a monthly seminar and tutorial series; please join the Slack workspace or see the group website for scheduling and resources.
Microsoft Teams meeting
Join on your computer, mobile app or room device
Click here to join the meeting
Meeting ID: 225 924 574 733
Or call in (audio only)
+1 630-556-7958,,50836801# United States, Big Rock
Phone Conference ID: 508 368 01#