Jennifer Sleeman

University of Maryland, Baltimore County

Research Assistant Professor

Dr. Jennifer Sleeman is a Research Assistant Professor in Computer Science at the University of Maryland, Baltimore County (UMBC). She defended her Ph.D. thesis, Dynamic Data Assimilation for Topic Modeling (DDATM), in 2017 under Tim Finin and Milton Halem. Her research interests include generative models, natural language processing, semantic representation, and deep learning. Her early work included entity disambiguation and coreference resolution, which led to fine-grained entity type identification, published in AI Magazine in 2015. Following her novel contribution of generative topic modeling for graphs, she used generative topic models to discover cross-domain influence and relatedness among scientific research papers. Her Ph.D. thesis adapted the theory of data assimilation, which provides theoretical approaches for temporally integrating physical observations with dynamic simulation models, and applied it to a large, multi-sourced, scientific text collection. She performed data assimilation over 30 years of Intergovernmental Panel on Climate Change (IPCC) reports, starting from an initial model and integrating research documents to produce subsequent models over time. This methodology provides a new approach to multi-source data integration and trend prediction, with an innovative method for filtering noise and accounting for missing model data. Her work was awarded a Microsoft “AI for Earth” resource grant in 2017 and 2018 and won the best paper award at the Semantic Web for Social Good Workshop, held at the International Semantic Web Conference in 2018. She also remains an active research scientist in generative deep learning methods, for which a patent is pending.

Research Abstract:

Deep learning has had a profound impact on image recognition and natural language processing. Recent neural research on generating text from images and images from text has shown the promise of an intersection between these two domains. However, there are still issues with accurately capturing visual and textual details when translating from one modality to the other, and there is still a need to understand how each modality can enrich or supplement the other. State-of-the-art text and image generation approaches are built on generative models that learn latent representations and reconstruct data from them.

My research includes the application of generative models and the study of learned latent representations for natural language and image-based problems. We have shown the usefulness of generative topic models for identifying relatedness and influence among scientific documents, grounded by ontological domain concepts. Our work includes generative models for assimilating multi-sourced text into an existing model, adapting the theory of data assimilation to natural language. Our method, Dynamic Data Assimilation for Topic Modeling (DDATM), combines temporal latent topics derived from historical, time-varying documents with multi-domain cited text documents for improved model inference.

Ongoing efforts related to image generation involve the use of quantum restricted Boltzmann machines (RBMs). We compare the quantum learned distribution with the classically learned distribution and quantify the quantum effects on latent representations. Our future work explores generative learning as it relates to creating a shared common representation for images and text, enabling cross-modal generation and understanding.
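As a rough illustration of the topic-model-based relatedness idea above, the following Python sketch fits an LDA model over a toy corpus and scores document relatedness with the Hellinger distance between document-topic distributions. This is a minimal sketch of the general technique, not the grounded, ontology-aware method described in the abstract; the corpus, model sizes, and distance choice are illustrative assumptions.

```python
# Minimal sketch: measuring document relatedness via shared latent topics.
# Illustrates the general idea behind topic-model-based relatedness only;
# the toy corpus and parameters are assumptions, not the actual method.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "sea level rise coastal flooding climate model projections",
    "ice sheet melt contributes to sea level rise",
    "neural networks for image recognition benchmarks",
]

# Bag-of-words counts, then an LDA topic model over the corpus.
counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
theta = lda.transform(counts)  # per-document topic distributions

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (0 = identical)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Documents 0 and 1 share climate topics; document 2 should be more distant.
print(hellinger(theta[0], theta[1]), hellinger(theta[0], theta[2]))
```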
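Similarly, the skeleton of assimilating newly arriving documents into an existing topic model over time can be sketched with scikit-learn's online LDA updates. DDATM's actual assimilation scheme, including its noise filtering and handling of missing model data, is not reproduced here; the time slices and parameters below are placeholders.

```python
# Minimal sketch: updating one topic model as documents arrive over time.
# Only the skeleton of carrying a model forward through time slices is
# shown; DDATM's assimilation step (noise filtering, missing model data,
# weighting of cited documents) is not reproduced.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

time_slices = [
    ["greenhouse gas emissions warming trends"],        # e.g., an early report
    ["sea level rise observations model projections"],  # later documents
    ["extreme weather attribution regional impacts"],   # still later
]

# Shared vocabulary across all time slices (fit once, up front).
vectorizer = CountVectorizer()
vectorizer.fit([doc for ts in time_slices for doc in ts])

lda = LatentDirichletAllocation(n_components=3, random_state=0)
for t, docs in enumerate(time_slices):
    # Online variational update: the model at time t starts from time t-1.
    lda.partial_fit(vectorizer.transform(docs))
    print(f"t={t}: topic-word matrix shape {lda.components_.shape}")
```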
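Finally, a hedged sketch of comparing an RBM's learned distribution against a reference distribution. Since no quantum hardware is assumed here, a classical Bernoulli RBM is trained on toy binary data, its distribution is computed exactly by enumerating the small visible space via the standard free-energy formula, and its divergence from the empirical data distribution is measured with KL; in the research itself, the comparison is between quantum- and classically trained models.

```python
# Minimal sketch: train a classical Bernoulli RBM and compare its learned
# distribution against a target distribution with KL divergence. Stands in
# for the quantum-vs-classical comparison; data and sizes are illustrative.
import itertools
import numpy as np
from sklearn.neural_network import BernoulliRBM

n_visible = 4
# Toy binary dataset biased toward two patterns.
data = np.array([[1, 1, 0, 0]] * 60 + [[0, 0, 1, 1]] * 40)

rbm = BernoulliRBM(n_components=3, learning_rate=0.05, n_iter=200,
                   random_state=0).fit(data)

def free_energy(v, rbm):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + W_j.v)) for a Bernoulli RBM."""
    wx = v @ rbm.components_.T + rbm.intercept_hidden_
    return -(v @ rbm.intercept_visible_) - np.logaddexp(0, wx).sum(axis=-1)

# Enumerate all 2^4 visible states to normalize the model distribution.
states = np.array(list(itertools.product([0, 1], repeat=n_visible)), float)
logp = -free_energy(states, rbm)
model_p = np.exp(logp - logp.max())
model_p /= model_p.sum()

# Empirical distribution of the training data over the same states.
emp_p = np.array([(data == s).all(axis=1).mean() for s in states])

# KL(empirical || model), skipping zero-probability empirical states.
mask = emp_p > 0
kl = np.sum(emp_p[mask] * np.log(emp_p[mask] / model_p[mask]))
print("KL(empirical || RBM):", kl)
```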