Matthew S. Evanusa

PhD · Lead ML Engineer · AI Researcher

“True agency emerges from stateful, memory-coupled networks, not functions.”

Research Philosophy

Dynamical Systems & Intelligence

Intelligence is not a static mapping from inputs to outputs. It emerges from the continuous evolution of internal state — adaptive, self-modifying dynamical systems that maintain coherent world models through time. My research treats neural networks as dynamical systems first, function approximators second.
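
As a minimal illustration of this framing (the update rule, dimensions, and names such as state_dim are assumptions made for the sketch, not drawn from any of the publications below), a recurrent network can be written as a discrete-time dynamical system whose output is read from an evolving internal state rather than computed from the current input alone:

    import numpy as np

    # Minimal sketch: a recurrent network viewed as a discrete-time dynamical
    # system. The output at each step is read from the evolving internal state,
    # not from the current input alone.
    rng = np.random.default_rng(0)
    state_dim, input_dim, output_dim = 64, 8, 4          # illustrative sizes

    W = rng.normal(scale=1.0 / np.sqrt(state_dim), size=(state_dim, state_dim))  # recurrent weights
    U = rng.normal(size=(state_dim, input_dim))                                   # input weights
    R = rng.normal(size=(output_dim, state_dim))                                  # readout weights

    def step(state, u):
        """One step of the state evolution x_{t+1} = tanh(W x_t + U u_t)."""
        return np.tanh(W @ state + U @ u)

    state = np.zeros(state_dim)
    for t in range(100):
        u_t = rng.normal(size=input_dim)   # stand-in for a streaming input
        state = step(state, u_t)           # the state carries history forward
        y_t = R @ state                    # output reflects the whole trajectory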

Memory–Computation Separation

Traditional RNNs conflate memory storage with computation, limiting their capacity for long-horizon temporal reasoning. Maelstrom Networks address this by topologically separating a persistent memory substrate from a feed-forward readout mechanism, enabling continual learning without catastrophic forgetting.
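
A rough sketch of that separation, in the spirit of reservoir computing rather than the actual Maelstrom Networks implementation (all names, sizes, the leaky update, and the ridge-regression readout are assumptions for illustration):

    import numpy as np

    # Hedged sketch of a memory/computation split: a persistent recurrent
    # "memory substrate" whose weights stay fixed, plus a separate feed-forward
    # readout that is the only trained component.
    rng = np.random.default_rng(1)
    mem_dim, input_dim, output_dim = 128, 16, 3          # illustrative sizes

    W_mem = rng.normal(scale=0.9 / np.sqrt(mem_dim), size=(mem_dim, mem_dim))  # fixed memory weights
    W_in = rng.normal(size=(mem_dim, input_dim))                                # fixed input projection

    def update_memory(memory, u, leak=0.1):
        """Leaky update of the persistent memory state; never reset between tasks."""
        return (1 - leak) * memory + leak * np.tanh(W_mem @ memory + W_in @ u)

    def fit_readout(states, targets, ridge=1e-3):
        """Ridge regression for the readout; the memory substrate is untouched."""
        S, T = np.asarray(states), np.asarray(targets)
        return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ T)

    memory = np.zeros(mem_dim)
    states, targets = [], []
    for t in range(200):
        u_t = rng.normal(size=input_dim)             # stand-in streaming input
        memory = update_memory(memory, u_t)          # memory evolves continuously
        states.append(memory.copy())
        targets.append(rng.normal(size=output_dim))  # stand-in supervision

    W_out = fit_readout(states, targets)             # computation lives in the readout
    y_pred = np.asarray(states) @ W_out              # predictions from memory states

Because only the readout is fit, new readouts can be trained for new tasks while the memory substrate keeps its weights and running state, which is one simple way to avoid overwriting what was learned earlier.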

Cognitive Architectures

Building machines that think requires more than scaling parameters. It demands architectures informed by cognitive science — persistent working memory, anticipatory cognition, self-referential state loops, and the temporal binding that gives rise to coherent experience. These are the building blocks of general intelligence.

From Research to Deployment

Theory without implementation is speculation. My work spans the full pipeline: from novel architecture design and mathematical analysis through prototype implementation to production deployment at scale. The gap between a research paper and a system that works reliably in the real world is where the real engineering happens.

Memory-Augmented Neural Networks
Temporal Neural Networks
Agentic AI & Tool Use
Continual Learning
Reservoir Computing
Spiking Neural Networks
Neuromorphic Computing
Embodied AI
Cognitive Architectures
Dynamical Systems
Online Learning
Representation Learning

Publications & Research

13+ Publications · ICLR Spotlight Paper · 2 Patents Filed

t-ConvESN: Temporal Convolution-Readout for Random Recurrent Neural Networks

Evanusa, M. S., Patil, V., Girvan, M., Goodman, J., Fermüller, C., & Aloimonos, Y.

ICANN · 2023 · Patent Filed

ProtoVAE: Prototypical Networks for Unsupervised Disentanglement

Patil, V., Evanusa, M., & JaJa, J.

arXiv:2305.09092 · 2023

Deep-Readout Random Recurrent Neural Networks for Real-World Temporal Data

Evanusa, M., Shrestha, S., Patil, V., Fermüller, C., Girvan, M., & Aloimonos, Y.

SN Computer Science · 2022

DoT-VAE: Disentangling One Factor at a Time

Patil, V., Evanusa, M., & JaJa, J.

ICANN · 2022

SpikeMS: Deep Spiking Neural Network for Motion Segmentation

Parameshwara, C. M., Li, S., Fermüller, C., Sanket, N. J., Evanusa, M. S., & Aloimonos, Y.

IEEE IROS · 2021

Deep Reservoir Networks with Learned Hidden Reservoir Weights Using Direct Feedback Alignment

Evanusa, M., Fermüller, C., & Aloimonos, Y.

arXiv · 2020

A Deep 2-Dimensional Dynamical Spiking Neuronal Network for Temporal Encoding Trained with STDP

Evanusa, M., Fermüller, C., & Aloimonos, Y.

arXiv:2009.00581 · 2020

Network Deconvolution

Ye, C., Evanusa, M., He, H., Mitrokhin, A., Goldstein, T., Yorke, J. A., ... & Aloimonos, Y.

ICLR · 2019 · Spotlight

Event-Based Attention and Tracking on Neuromorphic Hardware

Evanusa, M., & Sandamirskaya, Y.

IEEE/CVF CVPR Workshops · 2019

Learning Spatial Models for Navigation

Epstein, S. L., Aroor, A., Evanusa, M., Sklar, E. I., & Parsons, S.

COSIT · 2015

Experience

2025 – Present

Lead Machine Learning Engineer

Sylogic · San Jose, CA

  • Architected multi-agent orchestration system with chain-of-thought reasoning pipelines for complex code generation tasks
  • Deployed autonomous coding agents to enterprise clients, delivering 40+ automated pull requests with production-quality code
  • Designed MongoDB-backed RAG system for persistent agent memory and contextual reasoning
  • Led successful technical delivery for Fortune 500-scale enterprise clients under compressed timelines
40+ Production PRs · Fortune 500 Delivery · Multi-Agent Systems
2021 – 2025

Researcher

US Naval Research Laboratory · Signals Division TEWD

  • Developed and tested novel temporal neural networks for real-world, deployable signal-processing ML tasks for the US Navy
  • Optimized deep-model hyperparameters for production-grade deployment
  • Filed a patent for the t-ConvESN temporal convolution-readout architecture
Patent Filed · Production Deployment
2015 – 2024

PhD Researcher

University of Maryland, College Park · PRG Lab, Dept. of Computer Science

  • Developed Maelstrom Networks — a hybrid recurrent architecture separating memory from computation for embodied AI and continual learning
  • Co-authored ICLR Spotlight paper on Network Deconvolution (2019)
  • Awarded Intel Best Project at the Telluride Neuromorphic Workshop for spiking neural network work on the Intel Loihi chip
  • Published 11+ papers across ICLR, IROS, ICANN, CVPR Workshops, and journals
ICLR Spotlight · Intel Best Project · Patent Filed · 11+ Publications
2014 – 2015

Post-Bac Researcher

Epstein Lab, CUNY Hunter College · New York, NY

  • Researched cognitively inspired algorithms for robot navigation
  • Published at COSIT 2015 on spatial learning models