Recently, Eon Systems PBC co-founder and founding advisor Dr. Alex Wissner-Gross shared on X some of the work that we’ve been doing, and we were pleasantly surprised at how much attention it received. This embodied fly is still very much a work in progress, and a first step towards showing how an embodied brain would control a virtual body. Here we want to explain how the virtual fly works and what its limitations are. This post is necessarily quite technical.
First, we want to acknowledge how much this project depends on the broader neuroscience community. Our work builds directly on the adult fly connectome (Dorkenwald et al., 2024), on connectome-constrained brain models (Lappalainen et al., 2024), on neuromechanical fly body models (Wang-Chen et al., 2024; Özdil et al., 2024), and on decades of work mapping sensory circuits, descending neurons, and behavior in Drosophila. The current system is an integration effort, combining existing brain models with existing virtual body models. We’d also like to note that this work was a true team effort, conducted by Scott Harris, Aarav Sinha, Viktor Toth, Alexis Pomares, and Philip Shiu.
How does the fly work?
In the video, the fly uses invisible taste cues to navigate the environment towards a food source (stylized as slices of banana). Along the way, fictive dust accumulates on the fly, so it stops, grooms itself, then continues to the food and commences eating.
For the brain, the main starting point is the model from Shiu et al.: a leaky integrate-and-fire (LIF) model built from the adult Drosophila central-brain connectome, with approximately 140,000 neurons and roughly 50 million synaptic connections, using inferred neurotransmitter identities to determine the sign of synapses (Eckstein et al., 2024). That model showed that connectome structure alone can recover substantial sensorimotor processing for behaviors such as feeding and grooming, which is exactly why it is such a useful substrate for embodiment. The model depends on the broader FlyWire effort and its systematically annotated whole-brain resource of 140,000 neurons (Schlegel et al., 2024).
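To make these dynamics concrete, here is a minimal LIF sketch in the spirit of the Shiu et al. model, with each synapse’s sign set by its presynaptic neuron’s (here randomly assigned) neurotransmitter identity. The network size, constants, and weights are illustrative stand-ins, not the published model’s parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                  # toy size; the real model has ~140,000 neurons
dt, tau = 0.5, 10.0                       # integration step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

# Sparse synaptic weights; each column inherits the sign of its presynaptic
# neuron's (here randomly assigned) neurotransmitter, mimicking the use of
# inferred transmitter identities to sign connectome synapses.
W_mag = np.abs(rng.normal(0.0, 0.1, (N, N))) * (rng.random((N, N)) < 0.01)
nt_sign = rng.choice([-1.0, 1.0], size=N)
W = W_mag * nt_sign[None, :]

v = np.full(N, v_rest)                    # membrane potentials
spikes = np.zeros(N, dtype=bool)
drive = np.zeros(N)
drive[:50] = 0.05                         # stand-in for sensory input to a few neurons

for _ in range(200):                      # 100 ms of simulated time
    v += (dt / tau) * (v_rest - v) + W @ spikes + drive
    spikes = v >= v_thresh                # neurons at threshold fire...
    v[spikes] = v_reset                   # ...and reset
```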
We also use the Lappalainen et al. visual model, a model of the fly visual motion pathway. In that work, the authors built a connectome-constrained recurrent network for 64 visual cell types, spanning tens of thousands of neurons across the visual field, and showed that connectivity plus task constraints sufficed to predict neural activity across the motion system. Combined with the NeuroMechFly virtual body, this allows us to predict the activity of the visual system; we then “pipe” that information into the FlyWire LIF model.
To embody the brain, we use a published neuromechanical fly body, NeuroMechFly (Wang-Chen et al., 2024), which represents the fly as an anatomically structured articulated body with physically simulated joints, forces, contacts, and actuation. It has 87 independent joints within a detailed 3D mesh created from an X-ray microtomography scan of a real fruit fly (Wang-Chen et al., 2024). The digital fly runs on the MuJoCo physics engine, which provides high-fidelity, physically constrained environments for behavioral simulations (Todorov et al., 2012).
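For readers unfamiliar with MuJoCo, the self-contained snippet below (using the official mujoco Python bindings) shows the kind of physics stepping the body model is built on. The toy model here is a single hinged box, not the NeuroMechFly mesh; only the stepping pattern carries over.

```python
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body>
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="box" size="0.1 0.02 0.02" mass="0.01"/>
    </body>
  </worldbody>
  <actuator>
    <motor name="hinge_motor" joint="hinge"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

data.ctrl[0] = 1e-4                            # constant torque on the hinge motor
n_steps = int(0.015 / model.opt.timestep)      # one 15 ms brain-body sync window
for _ in range(n_steps):
    mujoco.mj_step(model, data)                # advance the physics
print(f"hinge angle after 15 ms: {data.qpos[0]:.5f} rad")
```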
NeuroMechFly v2 already implements sensory inputs, including simulated vision and olfaction, which we use. Fly walking is implemented using slight modifications to existing NeuroMechFly controllers, which were trained to imitate the walking behavior of real flies. We also note that the Vaxenburg et al. (2025) whole-body model, which we did not use, showed realistic walking and flight using reinforcement-learned controllers and high-level steering signals.
Conceptually, the full loop has four parts. First, sensory events in the virtual world are mapped onto identified sensory neurons or sensory pathways. Second, brain activity is updated in a connectome-constrained neural model. Third, selected descending outputs are translated into low-dimensional motor commands for the body. Fourth, the resulting movement changes the sensory state, which is fed back into the brain. We currently synchronize the brain and body every 15 ms: at each step, we calculate the brain’s response to the current sensory input and then simulate the body’s response for 15 ms. We note that this 15 ms time step may be too slow for some behaviors.
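In pseudocode, the loop looks roughly like the sketch below. The BrainModel and BodyModel interfaces are hypothetical stand-ins, not our actual classes; only the four-part structure and the 15 ms sync step reflect the real system.

```python
SYNC_DT_MS = 15.0    # brain-body synchronization interval

class BrainModel:
    """Stand-in for the connectome-constrained LIF model."""
    def set_sensory_input(self, sensory_state: dict) -> None:
        pass  # (1) map world events onto identified sensory neurons
    def step(self, duration_ms: float) -> dict:
        # (2) integrate LIF dynamics for duration_ms; return DN firing rates
        return {"DNa01": 0.0, "DNa02": 0.0, "oDN1": 0.0, "MN9": 0.0}

class BodyModel:
    """Stand-in for the NeuroMechFly body running in MuJoCo."""
    def apply_descending_commands(self, dn_rates: dict) -> None:
        pass  # (3) translate DN rates into targets for low-level controllers
    def step(self, duration_ms: float) -> dict:
        # (4) advance the physics; return the new sensory state
        return {"taste": {}, "dust": {}, "vision": None}

brain, body = BrainModel(), BodyModel()
sensory_state: dict = {}
for _ in range(1000):                          # ~15 s of simulated time
    brain.set_sensory_input(sensory_state)     # world -> sensory neurons
    dn_rates = brain.step(SYNC_DT_MS)          # update brain activity
    body.apply_descending_commands(dn_rates)   # DN rates -> motor commands
    sensory_state = body.step(SYNC_DT_MS)      # movement -> new sensory state
```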
Sensory input: how the virtual world enters the brain
Various sensory inputs enter through the body model. For taste, we can activate gustatory receptor neurons corresponding to appetitive stimuli such as sugar, or aversive bitter neurons (Shiu et al., 2024; Tastekin et al., 2025). As in the biological fly, when taste inputs on the legs and proboscis of the NeuroMechFly body are activated, the corresponding taste inputs of the brain are activated. This causes feeding, turning, and slowing near appetitive food (Shiu et al., 2024; Sapkal et al., 2024; Scott, 2018). Olfaction can be implemented similarly, by activating the appropriate olfactory receptor neurons.
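As a concrete illustration, the sketch below encodes taste contact in the simplest possible way: a leg or proboscis contact point within a food site’s radius drives sugar-sensing gustatory receptor neurons (GRNs) at a fixed rate. All names, thresholds, and rates here are assumptions for illustration.

```python
import numpy as np

SUGAR_RATE_HZ = 100.0   # assumed GRN drive while a body part touches food

def grn_rates(contact_positions: dict, food_xy: np.ndarray,
              food_radius: float) -> dict:
    """Map body-part contact positions to sugar-GRN firing rates (Hz)."""
    rates = {}
    for part, pos in contact_positions.items():
        touching = np.linalg.norm(pos[:2] - food_xy) < food_radius
        rates[part] = SUGAR_RATE_HZ if touching else 0.0
    return rates

# Example: the left foreleg stands on the food patch; the proboscis does not.
print(grn_rates(
    {"foreleg_L": np.array([0.1, 0.0, 0.0]),
     "proboscis": np.array([5.0, 2.0, 0.5])},
    food_xy=np.array([0.0, 0.0]),
    food_radius=0.5,
))
```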
For touch and grooming, we use antennal mechanosensory pathways. Hampel et al. identified an antennal grooming command circuit in which Johnston’s organ mechanosensory neurons drive a brain circuit culminating in antennal descending neurons that are sufficient to elicit grooming (Seeds et al., 2014; Hampel et al., 2015; Hampel et al., 2020). Our current interface uses that idea directly: “virtual dust” activates antennal mechanosensory neurons, which then recruit descending signals associated with antennal grooming.
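A similarly simple encoding works for virtual dust: the drive to Johnston’s organ (JO) mechanosensory neurons grows with the dust accumulated on each antenna, saturating at some maximum rate. The gain and saturation values here are illustrative assumptions.

```python
DUST_GAIN_HZ = 2.0      # assumed Hz of JO drive per dust particle
MAX_RATE_HZ = 150.0     # assumed saturation rate

def jo_rates(dust_per_antenna: dict) -> dict:
    """Map per-antenna dust counts to JO mechanosensory firing rates (Hz)."""
    return {antenna: min(MAX_RATE_HZ, DUST_GAIN_HZ * count)
            for antenna, count in dust_per_antenna.items()}

# Example: dust concentrated on the left antenna drives stronger left JO input,
# which in turn recruits the antennal grooming circuit in the brain model.
print(jo_rates({"left": 40, "right": 5}))
```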
Connectome-constrained vision (i.e., modeling of the visual system with the Lappalainen model) is already implemented in NeuroMechFly. We determine the predicted activations of visual system neurons and “pipe” these activations into the corresponding neurons in our LIF model. Currently, these activations are somewhat “decorative,” in that they do not yet substantially influence our behavioral outputs, but we are working to extend this; we note that activating, for example, looming-sensitive neurons in the LIF model activates descending neurons involved in escape.
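Mechanically, the “piping” amounts to an index mapping: each simulated visual cell type’s activity is injected as input to the matching neurons in the LIF model. The function below is a hypothetical sketch of that step; the mapping table and gain are assumptions.

```python
import numpy as np

def inject_visual_activity(lif_input: np.ndarray,
                           visual_activity: dict,
                           type_to_lif_ids: dict,
                           gain: float = 1.0) -> None:
    """Add visual-model activity to the LIF model's input drive, in place.

    visual_activity maps a cell-type name (e.g., "T4a") to a vector of
    activities, one per neuron of that type; type_to_lif_ids maps the same
    name to the indices of the corresponding neurons in the LIF model.
    """
    for cell_type, activity in visual_activity.items():
        ids = type_to_lif_ids[cell_type]
        lif_input[ids] += gain * np.asarray(activity)

# Example with toy data: two "T4a" cells mapped onto LIF neurons 10 and 11.
lif_input = np.zeros(100)
inject_visual_activity(lif_input, {"T4a": [0.3, 0.7]}, {"T4a": np.array([10, 11])})
```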
Descending neuron control: how brain activity controls the body
The fly body is not currently driven by the full downstream motor hierarchy of the biological fly. Instead, we use a small number of descending outputs as a practical interface between the connectome model and the biomechanics. In the fly, specific descending neurons are known to be involved in particular behaviors (Simpson, 2024).
Activating specific descending neurons influences the controllers of the body, which have been trained by imitation learning to mimic particular fly behaviors. For example, in our model, antennal grooming is driven through antennal descending neurons (Seeds et al., 2014; Hampel et al., 2015; Hampel et al., 2020). Steering is driven through the neurons DNa01 and DNa02 (Yang et al., 2024), which are implicated in turning. Forward velocity is modeled by activation of oDN1 (Sapkal et al., 2024). Feeding is modeled by activation of proboscis motor neurons, specifically MN9. We note that many DNs have validated, behaviorally meaningful roles, but also that these neurons operate in networks rather than as isolated one-neuron-to-one-behavior buttons. Braun et al. (2024) showed that command-like descending signals recruit broader descending populations, and there are over 1,000 descending neurons in total. So the present controller is best thought of as a deliberately low-dimensional readout layer from a much richer neural system.
A useful analogy is driving a car. If you know the state of the steering wheel, accelerator, and brake, you can predict a lot about what the car will do without explicitly simulating every combustion event inside the engine. Our use of descending neurons is similar. We currently treat a small set of descending signals as control handles for, e.g., turning, forward velocity, escape, backward walking (escape and backward walking are not shown in our demo), and grooming, and then let lower-level controllers convert those signals into joint torques, leg trajectories, or other body-level actuation.
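The sketch below shows what such a low-dimensional readout can look like: a handful of DN firing rates become steering, speed, grooming, and feeding commands for the lower-level controllers. The gains, thresholds, and the left/right DNa02 naming are hand-chosen illustrative assumptions, not our actual parameters.

```python
from dataclasses import dataclass

TURN_GAIN = 0.02      # assumed turn command per Hz of left-right DN asymmetry
SPEED_GAIN = 0.1      # assumed forward-speed command per Hz of oDN1 activity
GROOM_THRESH = 20.0   # assumed Hz of aDN activity that triggers grooming
FEED_THRESH = 20.0    # assumed Hz of MN9 activity that triggers feeding

@dataclass
class MotorCommand:
    turn: float        # signed steering command
    forward: float     # forward walking command
    groom: bool        # engage the grooming controller
    feed: bool         # engage the feeding motor program

def dn_readout(rates: dict) -> MotorCommand:
    """Translate descending-neuron firing rates (Hz) into control handles."""
    return MotorCommand(
        turn=TURN_GAIN * (rates["DNa02_R"] - rates["DNa02_L"]),
        forward=SPEED_GAIN * rates["oDN1"],
        groom=rates["aDN"] > GROOM_THRESH,
        feed=rates["MN9"] > FEED_THRESH,
    )

# Example: right-biased DNa02 activity steers right while walking forward.
print(dn_readout({"DNa02_R": 30.0, "DNa02_L": 10.0, "oDN1": 15.0,
                  "aDN": 0.0, "MN9": 0.0}))
```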
Behaviors
We have examined a few particular behaviors, among others:
Grooming: Flies perform grooming behaviors to clean their body parts. One such behavior is antennal grooming (already simulated in the unembodied Shiu et al., 2024 brain model): the fly uses its forelegs to brush its antennae when they become dusty or stimulated.
Feeding: Drosophila exhibit a stereotyped feeding response when they taste sugar; again, this has already been simulated in the unembodied Shiu et al., 2024 brain model. To simulate feeding, we place a virtual “food source” (e.g., a drop of sugar solution) in the environment and allow the fly’s proboscis or leg to contact it. In the model, contact with sugar is sensed by gustatory neurons on the legs or proboscis, which send signals to the brain’s feeding circuit. If the stimulus is appetitive (e.g., sweet), the brain model activates the “motor program” for feeding.
Foraging: Foraging is a more complex, goal-directed behavior where the fly explores its environment to find food. In the simulation, we set up an arena with one or more odor or taste sources representing food. The fly’s task is to wander until it detects the food cue, then navigate toward it.
Fleeing from threatening visual stimuli: Flies have an innate escape response to looming threats (such as an approaching predator or a sudden shadow overhead). In our simulation, we can replicate a looming stimulus (for example, a dark object rapidly expanding in the fly’s visual field) and observe the model’s response. Activation of looming-sensitive neurons activates neurons that elicit escape in our unembodied model, but we have not yet implemented escape behavior in the body.
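A looming stimulus is easy to parameterize: a disc of radius r approaching at constant speed v subtends a visual angle theta(t) = 2 * atan(r / (v * (t_c - t))), which grows explosively as the projected collision time t_c approaches. The sketch below uses illustrative values.

```python
import math

def looming_angle_deg(t: float, t_collision: float = 1.0,
                      radius: float = 0.01, speed: float = 0.5) -> float:
    """Angular size (deg) of a disc approaching at constant speed, for t < t_collision."""
    distance = speed * (t_collision - t)          # remaining distance to the eye
    return math.degrees(2.0 * math.atan2(radius, distance))

for t in (0.0, 0.5, 0.9, 0.99):
    print(f"t = {t:.2f} s  ->  {looming_angle_deg(t):6.1f} deg")
```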
What the current embodied fly is not
Our work is an integration of previously published components; we note some of its limitations here. Importantly, we implement only a small subset of sensory inputs and model only a handful of behaviors.
First, the Shiu et al. model is a simplified neuron model. It uses leaky integrate-and-fire dynamics rather than morphologically detailed multicompartment neurons, and it relies on inferred neurotransmitter identities and simplified synapse models. This means that dendritic nonlinearities, biophysical channel diversity, and many specific dynamics are not represented. This is enough to recover some sensorimotor transformations, but it clearly does not capture the full range of neural activity. Further, internal state, plasticity, learning, and hormonal changes are largely missing. Biological flies do not respond to the same sensory input the same way in all contexts: hunger, satiety, arousal, mating state, egg-laying state, recent sensory history, neuromodulators, and learning all reshape sensorimotor transformations.
Next, brain-body coupling remains a challenging engineering and scientific problem throughout the whole stack. The central difficulty is not merely running a neural simulation and a physics simulation side by side; it is deciding how firing rates or spikes in specific descending neurons should map onto torques, joint trajectories, posture changes, or coordinated sequences of leg movements. At what rate should a particular sensory stimulus activate specific sensory neurons, and how much should a particular descending neuron’s activity influence, for example, turning speed? These mappings can be chosen somewhat arbitrarily by hand (as in our case), learned with reinforcement learning, or mediated by lower-level controllers, but in all cases the result is still an approximation of the true motor hierarchy. One path forward might be more imaging or electrophysiology to characterize the specific transformation between DN firing rates and specific behaviors.
Additionally, our current descending-neuron interface is quite sparse. DNa01, DNa02, aDN1, oDN1, the giant fiber, proboscis motor neurons, and a few others we have experimented with are involved in a variety of behaviors, but they do not span the full repertoire of fly descending-neuron behavioral control. Recent work shows that descending neurons are numerous, partially redundant, hierarchical, and population-based. Some are “broadcasters” that recruit other DNs; others contribute specialized components of steering, grooming, flight, or reproductive behavior (Braun et al., 2024). That means our current controller can produce recognizable behaviors, but it almost certainly does so through a much lower-dimensional control interface than the biological fly uses. An interesting use of our model, and of other embodied models, may be to predict the roles of particular descending neurons from when they are predicted to be active for a given sensory input; Pugliese et al. (2025) predict the roles of particular descending neurons from a computational activation screen. We also note that extending our model to include the VNC and other outputs is another useful direction.
Finally, our results should not yet be interpreted as proof that structure alone is sufficient to recover the entire behavioral repertoire of the fly in a scientifically rigorous way. Pure structure-to-behavior is the direction we want to explore, but achieving a broad embodied repertoire will likely require additional learning, additional priors, more detailed motor interfaces, and more functional data. In that sense, the current embodied fly is best understood as a research and demonstration platform.
How you can help
Eon is currently drafting a framework for specifying the fidelity of brain emulations and uploads, which we expect to share in the medium term, and we will be soliciting input on these documents. Additionally, we hope to work with academic and industry groups on this challenge. If you’re interested in collaborating, please reach out at contact@eon.systems.
Conclusion
We view the embodied fly as an important first step. It is not the endpoint, and the model currently makes many significant simplifications. But it may be a useful testbed for connectome-constrained sensorimotor control, for evaluating candidate brain-body interfaces, and for making the problem of embodied emulation concrete enough to improve upon. We also hope it serves as an interesting and exciting demonstration of how brain-model embodiment might work.
Citations
Braun, J., Hurtak, F., Wang-Chen, S., et al. (2024). Descending networks transform command signals into population motor control. Nature, 630(8017), 686–694. https://doi.org/10.1038/s41586-024-07523-9
Dorkenwald, S., et al. (2024). Neuronal wiring diagram of an adult brain. Nature, 634(8032), 124–138. https://doi.org/10.1038/s41586-024-07558-y
Eckstein, N., et al. (2024). Neurotransmitter classification from electron microscopy images at synaptic sites in Drosophila melanogaster. Cell, 187(10), 2574–2594.e23. https://doi.org/10.1016/j.cell.2024.03.016
Hampel, S., Franconville, R., Simpson, J. H., & Seeds, A. M. (2015). A neural command circuit for grooming movement control. eLife, 4, e08758. https://doi.org/10.7554/eLife.08758
Hampel, S., Eichler, K., Yamada, D., Bock, D. D., Kamikouchi, A., & Seeds, A. M. (2020). Distinct subpopulations of mechanosensory chordotonal organ neurons elicit grooming of the fruit fly antennae. eLife, 9, e59976. https://doi.org/10.7554/eLife.59976
Lappalainen, J. K., et al. (2024). Connectome-constrained networks predict neural activity across the fly visual system. Nature, 634(8036), 1132–1140. https://doi.org/10.1038/s41586-024-07939-3
Özdil, P. G., Arreguit, J., Scherrer, C., Ijspeert, A., & Ramdya, P. (2024). Centralized brain networks underlie body part coordination during grooming [Preprint]. bioRxiv. https://doi.org/10.1101/2024.12.17.628844
Sapkal, N., et al. (2024). Neural circuit mechanisms underlying context-specific halting in Drosophila. Nature, 634(8032), 191–200. https://doi.org/10.1038/s41586-024-07854-7
Schlegel, P., et al. (2024). Whole-brain annotation and multi-connectome cell typing of Drosophila. Nature, 634(8032), 139–152. https://doi.org/10.1038/s41586-024-07686-5
Scott, K. (2018). Gustatory processing in Drosophila melanogaster. Annual Review of Entomology, 63, 15–30. https://doi.org/10.1146/annurev-ento-020117-043331
Shiu, P. K., et al. (2024). A Drosophila computational brain model reveals sensorimotor processing. Nature, 634(8032), 210–219. https://doi.org/10.1038/s41586-024-07763-9
Simpson, J. H. (2024). Descending control of motor sequences in Drosophila. Current Opinion in Neurobiology, 84, 102822. https://doi.org/10.1016/j.conb.2023.102822
Tastekin, I., et al. (2025). From sensory detection to motor action: The comprehensive Drosophila taste-feeding connectome [Preprint]. bioRxiv. https://doi.org/10.1101/2025.08.25.671814
Todorov, E., Erez, T., & Tassa, Y. (2012). MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 5026–5033). IEEE. https://doi.org/10.1109/IROS.2012.6386109
Vaxenburg, R., et al. (2025). Whole-body physics simulation of fruit fly locomotion. Nature, 643(8074), 1312–1320. https://doi.org/10.1038/s41586-025-09029-4
Wang-Chen, S., Stimpfling, V. A., Lam, T. K. C., Özdil, P. G., Genoud, L., Hurtak, F., & Ramdya, P. (2024). NeuroMechFly v2: Simulating embodied sensorimotor control in adult Drosophila. Nature Methods, 21(12), 2353–2362. https://doi.org/10.1038/s41592-024-02497-y
Yang, H. H., et al. (2024). Fine-grained descending control of steering in walking Drosophila. Cell, 187(22), 6290–6308.e27. https://doi.org/10.1016/j.cell.2024.08.033