About
I am an incoming MSc student in Computer Science at the University of Victoria, supervised by Dr. Ashery Mblinyi. My research focuses on continual learning for medical image analysis: developing systems that adapt to new data while mitigating catastrophic forgetting.
I explore continual learning, memory-efficient architectures, and low-precision computation for resource-constrained deployment. My background includes work on neural network quantization and efficient model design.
Research Interests
Continual and Lifelong Learning
Investigating mechanisms that enable neural networks to acquire new knowledge progressively while preserving previously learned representations. Central to this work is understanding and mitigating catastrophic forgetting in deep learning systems.
Research questions: Memory consolidation mechanisms in artificial neural networks; task interference and knowledge transfer in sequential learning; scalable continual learning for real-world deployment; theoretical foundations of forgetting and retention.
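One concrete mitigation from this literature is elastic weight consolidation (Kirkpatrick et al., 2017), which adds a quadratic penalty anchoring the parameters that mattered for earlier tasks. The PyTorch sketch below is illustrative only: fisher_diag and anchor_params are hypothetical names for a diagonal Fisher estimate and a snapshot of previously learned weights, not a standard API.

import torch

def ewc_penalty(model: torch.nn.Module, fisher_diag: dict,
                anchor_params: dict, lam: float = 100.0):
    # Quadratic penalty that discourages drift away from weights learned
    # on earlier tasks; parameters with a larger Fisher estimate (more
    # important to old tasks) are held more tightly.
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (p - anchor_params[name]) ** 2).sum()
    return (lam / 2.0) * penalty

# During training on a new task, the penalty is simply added to the task loss:
#   loss = task_loss + ewc_penalty(model, fisher_diag, anchor_params)
#   loss.backward()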
Memory-Efficient Neural Architectures
Developing neural network architectures optimized for resource-constrained environments through model compression, quantization, and novel parameterization strategies. This work addresses the fundamental tension between model capacity and computational efficiency.
Technical focus: Hypernetwork-based optimization for ternary networks; low-precision arithmetic in deep learning systems; parameter-efficient fine-tuning methodologies; hardware-aware neural architecture design.
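To make the ternary setting concrete: a ternary network constrains each weight to the set {-alpha, 0, +alpha}, so training requires a projection from full-precision weights onto that set. Below is a minimal sketch following the thresholding heuristic of Ternary Weight Networks (Li et al., 2016); the hypernetwork that would generate or refine such weights in the work described above is omitted here.

import torch

def ternarize(w: torch.Tensor) -> torch.Tensor:
    # Project full-precision weights onto {-alpha, 0, +alpha}: threshold
    # at roughly 0.7 * mean(|w|), then scale the surviving positions by
    # their mean magnitude (the TWN heuristic).
    delta = 0.7 * w.abs().mean()
    mask = w.abs() > delta
    if not mask.any():
        return torch.zeros_like(w)
    alpha = w.abs()[mask].mean()
    return alpha * torch.sign(w) * mask

Storing only the sign pattern and one scale per tensor is where the memory savings come from: each 32-bit weight shrinks to about 1.6 bits of information (log2 of 3) plus a single shared alpha.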
Edge AI and Resource-Constrained Deployment
Investigating deployment strategies for neural networks in resource-constrained environments, with emphasis on mobile and IoT devices. This research addresses the practical challenges of bringing advanced AI capabilities to edge computing scenarios.
Research directions: On-device learning and inference optimization; co-design of models and target hardware; energy-efficient computing for AI workloads; real-time model adaptation at the edge.
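Much of this deployment work reduces to low-precision arithmetic. As a small self-contained illustration, the NumPy sketch below implements asymmetric affine int8 quantization, the arithmetic underlying many on-device inference runtimes; the function names are illustrative rather than tied to any particular framework.

import numpy as np

def quantize_int8(x: np.ndarray):
    # Asymmetric affine quantization: x ~= scale * (q - zero_point).
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)       # keep real 0 exactly representable
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant input
    zero_point = round(-lo / scale) - 128     # the integer that real 0 maps to
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return scale * (q.astype(np.float32) - zero_point)

Keeping zero exactly representable is a deliberate design choice: zero padding and ReLU outputs produce exact zeros, and a zero that rounded to a nonzero code would inject a systematic bias into every downstream operation.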
