Title: Jaxley: differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics
Authors: Deistler, Michael; Kadhim, Kyra L.; Pals, Matthijs; Beck, Jonas; Huang, Ziwei; Gloeckler, Manuel; Lappalainen, Janne K.; Schroeder, Cornelius; Berens, Philipp; Goncalves, Pedro; Macke, Jakob H.
Date issued: 2025
Date available: 2026-01-26
Type: Journal article
Language: English
ISSN: 1548-7091
DOI: 10.1038/s41592-025-02895-w
URI: https://imec-publications.be/handle/20.500.12860/58736
Identifiers: WOS:001613790600001; MEDLINE:41233544

Abstract: Biophysical neuron models provide insights into cellular mechanisms underlying neural computations. A central challenge has been to identify parameters of detailed biophysical models such that they match physiological measurements or perform computational tasks. Here we describe a framework for simulating biophysical models in neuroscience—Jaxley—which addresses this challenge. By making use of automatic differentiation and GPU acceleration, Jaxley enables optimizing large-scale biophysical models with gradient descent. Jaxley can learn biophysical neuron models to match voltage or two-photon calcium recordings, sometimes orders of magnitude more efficiently than previous methods. Jaxley also makes it possible to train biophysical neuron models to perform computational tasks. We train a recurrent neural network to perform working memory tasks, and a network of morphologically detailed neurons with 100,000 parameters to solve a computer vision task. Jaxley improves the ability to build large-scale data- or task-constrained biophysical models, creating opportunities for investigating the mechanisms underlying neural computations across multiple scales.
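
The core idea in the abstract — that automatic differentiation makes biophysical parameters trainable by gradient descent — can be illustrated with a minimal sketch. The code below does not use Jaxley's actual API; it fits a toy single-compartment passive membrane (a hypothetical stand-in for a detailed biophysical model) to a target voltage trace using plain JAX autodiff, with optax as an assumed optimizer choice. All names (simulate, loss, g_leak, e_leak) are illustrative, not drawn from the paper.

```python
# Minimal sketch, assuming a toy passive membrane: C dV/dt = I - g_leak*(V - e_leak).
# NOT Jaxley's API — just the gradient-descent-through-a-simulator idea.
import jax
import jax.numpy as jnp
import optax  # assumed optimizer library, common in the JAX ecosystem

DT = 0.025      # integration step (ms)
C_M = 1.0       # membrane capacitance (uF/cm^2)
I_EXT = 2.0     # constant injected current (uA/cm^2)
N_STEPS = 2000

def simulate(params):
    """Forward-Euler integration of the passive membrane equation."""
    log_g, e_leak = params
    g_leak = jnp.exp(log_g)  # log-parameterization keeps the conductance positive

    def step(v, _):
        v = v + DT * (I_EXT - g_leak * (v - e_leak)) / C_M
        return v, v

    _, trace = jax.lax.scan(step, e_leak, None, length=N_STEPS)
    return trace

def loss(params, target):
    """Mean squared error between simulated and target voltage traces."""
    return jnp.mean((simulate(params) - target) ** 2)

# Synthetic "recording" generated from known ground-truth parameters.
true_params = jnp.array([jnp.log(0.3), -65.0])
target = simulate(true_params)

# Gradient-based optimization of the biophysical parameters: autodiff
# propagates gradients through every integration step of the simulator.
params = jnp.array([jnp.log(0.1), -70.0])
optimizer = optax.adam(5e-2)
opt_state = optimizer.init(params)
grad_fn = jax.jit(jax.grad(loss))

for _ in range(1000):
    grads = grad_fn(params, target)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)

print("fitted g_leak:", jnp.exp(params[0]), "fitted e_leak:", params[1])
```

Because the whole simulation is written in JAX, the same pattern scales to many parameters and compartments and runs on GPU via jit, which is what makes gradient-based training of large models feasible in principle.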