Presented by Graham Annett – Computer Science emphasis
Location: City Center Plaza Conference Room 368
This proposal focuses on training multi-task, multi-modal agents using Semi-Supervised Learning (SSL) and Reinforcement Learning (RL) to develop deep learning models that are more capable than language-only models. We aim to address the limitations of current deep learning models that are predominantly language-focused, known as Large Language Models (LLMs), by generalizing them to a wider variety of problem domains.
We identify research questions related to training multi-task, multi-modal models, such as how to generate batches from all datasets, how to use in-context examples, and how to combine multi-modal models with language models to create systems that act as agents. Our proposed approach has the potential to contribute significantly to the field of deep learning by enabling models to perform complex tasks beyond language-based ones.
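One of the research questions above concerns generating training batches from all datasets. A minimal sketch of one common approach, weighted per-example dataset sampling, is shown below; the dataset names, weighting scheme, and function signature are illustrative assumptions, not details from the proposal itself.

```python
import random

def sample_batch(datasets, batch_size, weights=None, rng=None):
    """Draw a mixed batch by choosing a source dataset for each example.

    `datasets` maps a task name to a list of examples. Weights default to
    dataset size, so larger datasets contribute proportionally more
    (a hypothetical choice; temperature-scaled sampling is also common).
    """
    rng = rng or random.Random(0)
    names = list(datasets)
    if weights is None:
        weights = [len(datasets[n]) for n in names]  # size-proportional
    batch = []
    for _ in range(batch_size):
        name = rng.choices(names, weights=weights, k=1)[0]
        batch.append((name, rng.choice(datasets[name])))
    return batch

# Toy multi-modal datasets (placeholder examples).
datasets = {
    "vision": ["img0", "img1", "img2"],
    "language": ["txt0", "txt1"],
}
batch = sample_batch(datasets, batch_size=4)
```

Each batch then interleaves tasks and modalities, which lets a single model see every task during each training step rather than cycling datasets sequentially.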
Committee: Dr. Tim Andersen (Chair), Dr. Hoda Mehrpouyan, Dr. Grady Wright, and Dr. Casey Kennington