<?xml version="1.0"?>
<News hasArchived="false" page="1" pageCount="1" pageSize="10" timestamp="Mon, 20 Apr 2026 15:48:46 -0400" url="https://dev.my.umbc.edu/groups/umbc-ai/posts.xml?tag=vision">
  <NewsItem contentIssues="true" id="147361" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/147361">
  <Title>Talk: Seeing Beneath the Surface: Vision-Enabled Robots for Long-term Ocean Monitoring</Title>
  <Tagline>4:00&#8211;5:15pm ET Wed, Feb. 19, 2025, UMBC ITE 231 &amp; online</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><h3><span>Seeing Beneath the Surface: Vision-Enabled Robots for Long-term Ocean Monitoring</span></h3><div><span><br></span></div><div><h4><a href="https://xiaominlin.github.io/" rel="nofollow external" class="bo"><strong>Xiaomin Lin</strong></a>, JHU</h4></div><h4>4–5:15pm ET Wed, Feb. 19, 2025, ITE 231, UMBC &amp; <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">online</a></h4><div><br></div><div>Autonomous systems operating in complex and unstructured environments, especially underwater, require robust perception, adaptive navigation, and intelligent reasoning to function effectively. However, traditional AI models often struggle in these settings due to sensory limitations, dynamic obstacles, and computational constraints. This talk highlights these challenges and presents emerging technologies in subsea sensing and low-power autonomous operation. The first part of the talk explores <strong>multimodal sensing</strong>, demonstrating how optical, acoustic, and fused modalities enhance perception in low-visibility environments. The second part introduces <strong><a href="https://en.wikipedia.org/wiki/Active_perception" rel="nofollow external" class="bo">active perception</a></strong>, where robots dynamically select the most informative viewpoints to optimize navigation and exploration. Finally, the third part discusses efficient reasoning, showcasing how compact language models enable real-time decision-making for autonomous exploration and task execution. By integrating these three pillars, this research advances the next generation of intelligent autonomous systems for underwater robotics, environmental monitoring, and beyond.</div><div><br></div><div>Dr. <a href="https://xiaominlin.github.io/" rel="nofollow external" class="bo"><strong>Xiaomin Lin</strong></a> is a Postdoctoral Researcher at Johns Hopkins University, working at the intersection of AI, robotics, and edge computing. He received his Ph.D. in Electrical and Computer Engineering from the University of Maryland, College Park, where his dissertation focused on simulation-driven learning for autonomous underwater systems. His research spans perception-driven autonomy, multi-modal sensing, and efficient AI deployment on edge devices. His work has been recognized with the Best Paper Award at IROS 2024 (Autonomous Robotic Systems in Aquaculture) and the Best Poster Award at the Maryland Robotics Center Symposium. Dr. Lin's research has been funded by USDA, ONR, and AFRL, and he actively collaborates with academia and industry to push the boundaries of subsea autonomy.</div>
    <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
  </Body>
  <Summary>Seeing Beneath the Surface: Vision-Enabled Robots for Long-term Ocean Monitoring      Xiaomin Lin, JHU   4–5:15pm ET Wed, Feb. 19, 2025, ITE 231, UMBC &amp; online     Autonomous systems operating...</Summary>
  <Website>https://www.tejasgokhale.com/seminar.html</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/147361/guest@my.umbc.edu/0103990caf9e49b065d6af5a880dfbad/api/pixel</TrackingUrl>
  <Tag>active-perception</Tag>
  <Tag>ai</Tag>
  <Tag>computer-vision</Tag>
  <Tag>multimodal</Tag>
  <Tag>robot</Tag>
  <Tag>robotics</Tag>
  <Tag>talk</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>Advances in Perception, Prediction, and Reasoning Lab</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/361/e9801948a3571637871183d7091368ae/xxlarge.jpg?1739638480</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/361/e9801948a3571637871183d7091368ae/xlarge.jpg?1739638480</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/361/e9801948a3571637871183d7091368ae/large.jpg?1739638480</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/361/e9801948a3571637871183d7091368ae/medium.jpg?1739638480</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/361/e9801948a3571637871183d7091368ae/small.jpg?1739638480</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/361/e9801948a3571637871183d7091368ae/xsmall.jpg?1739638480</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/361/e9801948a3571637871183d7091368ae/xxsmall.jpg?1739638480</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Sat, 15 Feb 2025 12:03:24 -0500</PostedAt>
  <EditAt>Sat, 15 Feb 2025 12:51:40 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="145742" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/145742">
  <Title>Strengthening Image Generative AI: Integrating Fingerprinting and Revision Methods for Enhanced Safety and Control</Title>
  <Tagline>4-5:15pm EST, Monday Nov 25, Math&amp;Psych 106 and online</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><h4>Strengthening Image Generative AI: Integrating Fingerprinting and Revision Methods for Enhanced Safety and Control</h4><h4>4-5:15pm EST, Monday Nov 25, Math&amp;Psych 106 &amp; <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">online </a></h4><div><br></div><div><br></div><div>In the rapidly evolving field of <a href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence" rel="nofollow external" class="bo"><strong>Generative Artificial Intelligence</strong></a> (Gen-AI) for imaging, models such as DALL·E3 and Stable Diffusion have transitioned from theoretical concepts to practical tools with significant impact across various sectors including entertainment, art, journalism, and education. These advancements represent a substantial technological evolution, enhancing creative and professional practices. However, the widespread accessibility of Gen-AI also facilitates misuse by malicious actors who create deepfakes and spread misinformation, posing serious risks to societal well-being and privacy. This talk will address these critical challenges by focusing on enhancing the reliability of Image Gen-AI models through the identification and mitigation of inherent vulnerabilities and the development of computational tools and frameworks for enabling better community oversight. The talk will describe the development of innovative fingerprinting techniques that trace malicious Gen-AI outputs back to their sources, and the implementation of strategies to prevent the generation of unauthorized content. These efforts collectively strengthen the robustness and accountability of Gen-AI technologies, particularly in sensitive applications.</div><div> </div><div><a href="https://www.changhoonkim.com/" rel="nofollow external" class="bo"><strong>Dr. Changhoon Kim</strong></a> is a Postdoctoral Scientist in the Bedrock Team at Amazon. He completed his Ph.D. in Computer Engineering at Arizona State University. His primary research focuses on the creation of secure machine learning systems. He has dedicated his efforts to developing user-attribution methods for generative models, a critical area of research in the age of AI-generated hyper-realistic content for tracing malicious usage, and machine unlearning for removing private or harmful content from AI models. Kim’s research has been recognized at prestigious conferences such as ICLR, ICML, ECCV, and CVPR, and a U.S. patent for user-attribution in generative models. To further contribute to the community, he has organized tutorials and workshops at leading conferences to emphasize the importance of secure generative AI.</div><div><br></div>
    <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
  </Body>
  <Summary>Strengthening Image Generative AI: Integrating Fingerprinting and Revision Methods for Enhanced Safety and Control  4-5:15pm EST, Monday Nov 25, Math&amp;Psych 106 &amp; online         In the...</Summary>
  <Website>https://www.tejasgokhale.com/seminar.html</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/145742/guest@my.umbc.edu/c947837388de3f0bcf0c622cbdacb59a/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>gen-ai</Tag>
  <Tag>images</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>UMBC Cognitive Vision Group</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xxlarge.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xlarge.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/large.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/medium.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/small.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xsmall.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xxsmall.jpg?1732047123</ThumbnailUrl>
  <PawCount>1</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 19 Nov 2024 15:30:06 -0500</PostedAt>
</NewsItem>
  <NewsItem contentIssues="false" id="144588" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/144588">
  <Title>Talk today on AI for Event-Centric Video Retrieval, 1:30pm in ITE 325b</Title>
  <Body>
    <![CDATA[
    <div class="html-content"><span><span>If you are interested in a challenging AI problem involving integrated spoken language and video understanding, Reno Kriz from JHU will discuss the results of a large summer project focused on finding videos about specific current events. His presentation will be at 1:30 p.m. today (Tuesday, 10/8) in ITE 325b and also online. Register and get more information </span><a href="https://my3.my.umbc.edu/groups/langtech/events/134555" rel="nofollow external" class="bo"><span>here</span></a><span>.</span></span></div>
]]>
  </Body>
  <Summary>If you are interested in a challenging AI problem involving integrated spoken language and video understanding, Reno Kriz from JHU will discuss the results of a large summer project focused on...</Summary>
  <Website>https://my3.my.umbc.edu/groups/langtech/events/134555</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/144588/guest@my.umbc.edu/dfbc935fe3eaaed0ea7269f34ad72685/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>audio</Tag>
  <Tag>nlp</Tag>
  <Tag>text</Tag>
  <Tag>video</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>Language Technology Seminar Series</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 08 Oct 2024 10:10:00 -0400</PostedAt>
</NewsItem>
  <NewsItem contentIssues="true" id="141164" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/141164">
  <Title>Talk: Visible-Thermal Image Registration &amp; Translation, 4/24</Title>
  <Tagline>4-5:15 pm ET, Wed., April 24, 2024 in ENGR 231 and online</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><img src="https://ai.umbc.edu/wp-content/uploads/sites/734/2024/04/ordun.jpg" style="max-width: 100%; height: auto;"><div><br></div><div><div><strong>Visible-Thermal Image Registration and Translation for Remote Medical Applications</strong></div><div><br></div><div><strong><a href="https://www.linkedin.com/in/catherine-ordun/" rel="nofollow external" class="bo">Catherine Ordun</a>, Booz Allen Hamilton</strong></div><div><br></div><div><strong>4-5:15 pm ET, Wednesday, April 24, 2024</strong></div><div><strong>UMBC, ENGR 231 and <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">Webex</a></strong></div><div><br></div><div>Thermal imagery captured in the Long Wave Infrared (LWIR) spectrum has long-played a vital role in thermal physiology. Signs of stress and inflammation which are unseen in the visible spectrum, can be detected in LWIR due to principles of blackbody radiation. As a result, thermal facial imagery provides a unique modality for physiological assessment of states such as chronic pain. In this presentation, I will provide a presentation of my research into image registration to align visible-thermal images that serve as a prerequisite for image- to-image translation using conditional <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network" rel="nofollow external" class="bo">GANs</a> and <a href="https://en.wikipedia.org/wiki/Diffusion_model" rel="nofollow external" class="bo">Diffusion Models</a>. I will share recent work leading research with the National Institutes of Health applying this research in a real-world setting on cancer patients suffering from chronic pain.</div><div><br></div><div><a href="https://www.linkedin.com/in/catherine-ordun/" rel="nofollow external" class="bo">Dr. Catherine Ordun</a> is a Vice President at Booz Allen Hamilton, leading AI Rapid Prototyping and Tech Transfer solutions for mission-critical problems for the Federal Government. She drives AI rapid prototyping to support mission-critical proof-of-concepts across multiple AI domains, in addition to AI tech transfer to support algorithm reuse and consumption. She also leads multimodal AI research supporting the National Cancer Institute for chronic cancer pain detection. Dr. Ordun is a Ph.D. graduate of the UMBC Department of Information Systems advised by Drs. Sanjay Purushotham and Edward Raff, and obtained her bachelors degree from Georgia Tech, masters from Emory, and an MBA from GWU Business School. She also has an appointment at UMBC as Adjunct Research Assistant Professor.</div></div><div><br><hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo">UMBC Center for AI</a></div></div>
]]>
  </Body>
  <Summary>Visible-Thermal Image Registration and Translation for Remote Medical Applications     Catherine Ordun, Booz Allen Hamilton     4-5:15 pm ET, Wednesday, April 24, 2024  UMBC, ENGR 231 and Webex...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/141164/guest@my.umbc.edu/192aa5ef424eb6ba21ca5dace98211c5/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>diffusion-model</Tag>
  <Tag>gan</Tag>
  <Tag>healthcare</Tag>
  <Tag>image-processing</Tag>
  <Tag>long-wave-infrared</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>UMBC AI</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Mon, 22 Apr 2024 08:20:56 -0400</PostedAt>
  <EditAt>Mon, 22 Apr 2024 08:36:42 -0400</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="140902" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/140902">
  <Title>Talk: Learning to Synthesize Images, 4-5:15pm ET, Wed. 4/17</Title>
  <Tagline>Advances in Perception, Prediction &amp; Reasoning seminar</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><span><h4><span><strong>Learning to Synthesize Images </strong></span><span><strong>with Multimodal and Hierarchical </strong></span><span><strong>Inputs</strong></span></h4><h4><strong><a href="https://zharry29.github.io/" rel="nofollow external" class="bo">Yu Zeng</a>, JHU </strong></h4><p><strong>April 17, 2024 4:00 – 5:15 PM</strong></p><p><span><strong>ENGR 231, UMBC or <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">Webex</a></strong></span></p><div><span><br></span></div><br><p><span>In recent years, image synthesis and manipulation has experienced remarkable advancements driven by deep learning algorithms and web-scale data, yet there persists a notable disconnect between the intricate nature of human ideas and the simplistic input structures employed by the existing models. In this talk, I will present our research towards a more natural way for controllable image synthesis inspired by the coarse-to-fine workflow of human artists and the inherently multimodal aspect of human thought processes. We consider the inputs of semantic and visual modality at varying levels of hierarchy. For the semantic modality, we introduce a general framework for modeling semantic inputs of different levels, which includes image-level text prompts and pixel-level label maps as two extremes and brings a series of mid-level regional descriptions with different precision. For the visual modality, we explore the use of low-level and high-level visual inputs aligning with the natural hierarchy of visual processing. Additionally, as the misuse of generated images becomes a societal threat, I will introduce our findings on the trustworthiness of deep generative models in the second part of this talk and potential future research directions.</span></p><br><p><span><strong><a href="https://zharry29.github.io/" rel="nofollow external" class="bo">Yu Zeng</a></strong> is a Ph.D. candidate at Johns Hopkins University advised by Vishal M. Patel. Her research interest lies in computer vision and deep learning. She has focused on two main areas: (1) deep generative models for image synthesis and editing and (2) label-efficient deep learning. By combining these research areas, she aims to bridge human creativity and machine intelligence through user-friendly and socially responsible models while minimizing the need for intensive human supervision. Yuhas collaborated with researchers at NVIDIA and Adobe through internships. Prior to her Ph.D., she worked as a researcher at Tencent Games. Yu’s research has been recognized by the KAUST Rising Stars in AI, and her Ph.D. study has been supported by a JHU Kewei Yang and Grace Xin Fellowship.</span></p><br></span><div><span>• </span><a href="http://ai.umbc.edu/" rel="nofollow external" class="bo">ai.umbc.edu</a><span> •</span></div></div>
]]>
  </Body>
  <Summary>Learning to Synthesize Images with Multimodal and Hierarchical Inputs  Yu Zeng, JHU   April 17, 2024 4:00 – 5:15 PM  ENGR 231, UMBC or Webex      In recent years, image synthesis and manipulation...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/140902/guest@my.umbc.edu/dc88976cc5ca6f2a10576069fe0f9d31/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>images</Tag>
  <Tag>multimodal</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>UMBC AI</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xxlarge.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xlarge.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/large.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/medium.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/small.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xsmall.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xxsmall.jpg?1713184563</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Mon, 15 Apr 2024 08:44:49 -0400</PostedAt>
  <EditAt>Tue, 16 Apr 2024 22:06:11 -0400</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="139362" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/139362">
  <Title>UMBC Prof. Tejas Gokhale gives new faculty talk at AAAI 2024</Title>
  <Tagline>Robust Visual Understanding: from Recognition to Reasoning</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><img src="https://ai.umbc.edu/wp-content/uploads/sites/734/2024/02/ejas-Gokhale_aaai24.png" style="max-width: 100%; height: auto;"><div><br></div><div><div><span>UMBC Assistant Professor <a href="https://www.tejasgokhale.com/" rel="nofollow external" class="bo"><strong>Tejas Gokhale</strong></a> delivered an invited talk at the New Faculty Highlights session at the <a href="https://aaai.org/aaai-conference/" rel="nofollow external" class="bo"><strong>2024 AAAI conference</strong></a>. The talk, "Towards Robust Visual Understanding: from Recognition to Reasoning", described his research on building perception and reasoning models with a focus on benchmarking and improving robustness of data-driven visual understanding systems. AAAI's <strong><a href="https://aaai.org/aaai-conference/nfh-24-program/" rel="nofollow external" class="bo">New Faculty Highlights</a></strong> session recognizes and invites new AI faculty from across the globe to present their research and contribute an article to <strong><a href="https://aaai.org/ai-magazine/" rel="nofollow external" class="bo">AI Magazine</a>.</strong></span></div></div></div>
]]>
  </Body>
  <Summary>UMBC Assistant Professor Tejas Gokhale delivered an invited talk at the New Faculty Highlights session at the 2024 AAAI conference. The talk, "Towards Robust Visual Understanding: from Recognition...</Summary>
  <Website>https://my3.my.umbc.edu/groups/csee/posts/139304</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/139362/guest@my.umbc.edu/3ee8124655abb784247cc061b67273c0/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>reasoning</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>UMBC AI</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 27 Feb 2024 15:45:10 -0500</PostedAt>
  <EditAt>Tue, 27 Feb 2024 15:58:09 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="false" id="139035" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/139035">
  <Title>Talk: Creative Visual Storytelling, Fri 2/23, 2:30-3:30</Title>
  <Tagline>Laying the Foundations and Pushing the Boundaries</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><div><span><strong>Creative Visual Storytelling: Laying the </strong></span></div><div><strong>Foundations and Pushing the Boundaries</strong></div><div><br></div><div><a href="https://www.linkedin.com/in/stephanie-m-lukin/" rel="nofollow external" class="bo"><strong>Dr. Stephanie Lukin</strong></a></div><div><strong>U.S. Army Research Laboratory</strong></div><div><strong><br></strong></div><div><strong><span>2:30-3:30 pm, </span>Friday, February 23, 2024, <span>ITE 325b</span></strong></div><div><br></div><div>Creative visual storytelling - that is, the creative task of storytelling based on visual input - involves both assigning meaning to the visual input and conveying that meaning in story form.  The resulting stories are more than literal descriptions of events or scenery: they contain narrative arcs with characters, goals, and conflicts in potentially endless circumstances. In this talk, I will lay out my research exploring the foundations of creative visual storytelling and automating this novel type of storytelling. I examine three properties critical to such systems and the narratives they generate: the systems are highly expressive, their productive capability is key to problem solving and establishing story frames; the systems are responsible, the narratives they generate are grounded in the source material and avoid biases; and the system narratives are "co-constructive" with a human partner, they enable interlocutors to share common ground of experiences in different physical spaces across time through evolving events.</div><div><br></div><div>Bio: <a href="https://www.linkedin.com/in/stephanie-m-lukin/" rel="nofollow external" class="bo">Dr. Stephanie Lukin</a> is a Computer Scientist at the U.S. Army Research Laboratory's Los Angeles regional site (ARL-West). She holds a Ph.D. and M.S. in Computer Science from the University of California Santa Cruz, and had interned at Xerox PARC, Microsoft Research, and Google before joining ARL in 2017. Dr. Lukin specializes in narrative intelligence, examining the multi-modal interactions between humans and robots, and how stories can be told from the myriad of multi-modal data surrounding us.</div></div>
]]>
  </Body>
  <Summary>Creative Visual Storytelling: Laying the   Foundations and Pushing the Boundaries     Dr. Stephanie Lukin  U.S. Army Research Laboratory     2:30-3:30 pm, Friday, February 23, 2024, ITE 325b...</Summary>
  <Website>https://dl.acm.org/doi/pdf/10.1145/3544548.3580744</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/139035/guest@my.umbc.edu/b8b2322bfc90c1ee89238c195cb2b254/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>UMBC AI</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Fri, 16 Feb 2024 09:49:52 -0500</PostedAt>
</NewsItem>
  <NewsItem contentIssues="true" id="138553" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/138553">
  <Title>Talk: Visual Concept Learning Beyond Appearances, 3:30pm 2/8</Title>
  <Tagline>Modernizing a couple of classic ideas</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><h5>PPR Distinguished Speaker</h5><div><br></div><h4>Visual Concept Learning Beyond Appearances: Modernizing a Couple of Classic Ideas</h4><h5><a href="https://yezhouyang.engineering.asu.edu/" rel="nofollow external" class="bo">Dr. Yezhou Yang</a><br>Arizona State University</h5><div><br></div><h5>3:30-4:45 pm ET, Thur. Feb. 8, 2024</h5><h5>ITE 325b &amp; via <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">WebEx</a></h5><div><br></div><div>The goal of <a href="https://en.wikipedia.org/wiki/Computer_vision" rel="nofollow external" class="bo">Computer Vision</a>, as coined by <a href="https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)" rel="nofollow external" class="bo">Marr</a>, is to develop algorithms to answer "What are", "Where at", "When from" visual appearance. The speaker, among others, recognizes the importance of studying underlying entities and relations beyond visual appearance, following an Active Perception paradigm. This talk will present the speaker's efforts over the last decade, ranging from 1) reasoning beyond appearance for vision and language tasks (VQA, captioning, T2I, etc.), and addressing their evaluation misalignment, through 2) reasoning about implicit properties, to 3) their roles in a Robotic visual concept learning framework. The talk will also feature the Active Perception Group (APG)'s projects addressing emerging challenges of the nation in automated mobility and intelligent transportation domains, at the ASU School of Computing and Augmented Intelligence (SCAI).</div><div><br></div><div><a href="https://yezhouyang.engineering.asu.edu/" rel="nofollow external" class="bo"><strong>Yezhou (YZ) Yang</strong></a> is an Associate Professor and a Fulton Entrepreneurial Professor in the School of Computing and Augmented Intelligence (SCAI) at Arizona State University. He founded and directs the ASU Active Perception Group, and currently serves as the topic lead (situation awareness) at the Institute of Automated Mobility, Arizona Commerce Authority. He is also a thrust lead (AVAI) at Advanced Communications Technologies (ACT, a Science and Technology Center under the New Economy Initiative, Arizona). His work includes exploring visual primitives and representation learning in visual (and language) understanding, grounding them by natural language and high-level reasoning over the primitives for intelligent systems, secure/robust AI, and V&amp;L model evaluation alignment. Yang is a recipient of the Qualcomm Innovation Fellowship 2011, the NSF CAREER award 2018, and the Amazon AWS Machine Learning Research Award 2019. He received his Ph.D. from the University of Maryland at College Park, and B.E. from Zhejiang University, China. He is a co- founder of ARGOS Vision Inc, an ASU spin-off company.</div><div><br></div><div>The Advances in Perception, Prediction, and Reasoning (PPR) talks are organized and hosted by UMBC Professor <a href="https://www.tejasgokhale.com/" rel="nofollow external" class="bo">Tejas Gokhale</a>.</div><div><br></div></div>
]]>
  </Body>
  <Summary>PPR Distinguished Speaker     Visual Concept Learning Beyond Appearances: Modernizing a Couple of Classic Ideas  Dr. Yezhou Yang Arizona State University     3:30-4:45 pm ET, Thur. Feb. 8, 2024...</Summary>
  <Website>https://www.tejasgokhale.com/seminar.html</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/138553/guest@my.umbc.edu/78889d31153122f68f1a32b354be22a0/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>concepts</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>Advances in Perception, Prediction, and Reasoning Lab</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/138/553/a86785c3911819cf5e537c492b14aa74/xxlarge.jpg?1706728119</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/138/553/a86785c3911819cf5e537c492b14aa74/xlarge.jpg?1706728119</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/138/553/a86785c3911819cf5e537c492b14aa74/large.jpg?1706728119</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/138/553/a86785c3911819cf5e537c492b14aa74/medium.jpg?1706728119</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/138/553/a86785c3911819cf5e537c492b14aa74/small.jpg?1706728119</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/138/553/a86785c3911819cf5e537c492b14aa74/xsmall.jpg?1706728119</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/138/553/a86785c3911819cf5e537c492b14aa74/xxsmall.jpg?1706728119</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Wed, 31 Jan 2024 14:22:06 -0500</PostedAt>
  <EditAt>Tue, 27 Feb 2024 17:40:25 -0500</EditAt>
</NewsItem>
</News>
