<?xml version="1.0"?>
<News hasArchived="false" page="1" pageCount="1" pageSize="10" timestamp="Fri, 24 Apr 2026 10:29:01 -0400" url="https://dev.my.umbc.edu/groups/umbc-ai/posts.xml?tag=images">
  <NewsItem contentIssues="true" id="145742" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/145742">
  <Title>Strengthening Image Generative AI: Integrating Fingerprinting and Revision Methods for Enhanced Safety and Control</Title>
  <Tagline>4-5:15pm EST, Monday Nov 25, Math&amp;Psych 106 and online</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><h4>Strengthening Image Generative AI: Integrating Fingerprinting and Revision Methods for Enhanced Safety and Control</h4><h4>4-5:15pm EST, Monday Nov 25, Math&amp;Psych 106 &amp; <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">online</a></h4><div><br></div><div><br></div><div>In the rapidly evolving field of <a href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence" rel="nofollow external" class="bo"><strong>Generative Artificial Intelligence</strong></a> (Gen-AI) for imaging, models such as DALL·E 3 and Stable Diffusion have transitioned from theoretical concepts to practical tools with significant impact across sectors including entertainment, art, journalism, and education. These advancements represent a substantial technological evolution, enhancing creative and professional practices. However, the widespread accessibility of Gen-AI also facilitates misuse by malicious actors who create deepfakes and spread misinformation, posing serious risks to societal well-being and privacy. This talk addresses these challenges by focusing on enhancing the reliability of image Gen-AI models through the identification and mitigation of inherent vulnerabilities and the development of computational tools and frameworks that enable better community oversight. The talk will describe the development of innovative fingerprinting techniques that trace malicious Gen-AI outputs back to their sources, and the implementation of strategies to prevent the generation of unauthorized content. Together, these efforts strengthen the robustness and accountability of Gen-AI technologies, particularly in sensitive applications.</div><div> </div><div><a href="https://www.changhoonkim.com/" rel="nofollow external" class="bo"><strong>Dr. Changhoon Kim</strong></a> is a Postdoctoral Scientist in the Bedrock Team at Amazon. He completed his Ph.D. 
in Computer Engineering at Arizona State University. His primary research focuses on the creation of secure machine learning systems. He has dedicated his efforts to developing user-attribution methods for generative models, a critical area of research in the age of AI-generated hyper-realistic content for tracing malicious usage, and machine unlearning for removing private or harmful content from AI models. Kim’s research has been recognized at prestigious conferences such as ICLR, ICML, ECCV, and CVPR, and by a U.S. patent for user-attribution in generative models. To further contribute to the community, he has organized tutorials and workshops at leading conferences to emphasize the importance of secure generative AI.</div><div><br></div>
    <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
  </Body>
  <Summary>Strengthening Image Generative AI: Integrating Fingerprinting and Revision Methods for Enhanced Safety and Control  4-5:15pm EST, Monday Nov 25, Math&amp;Psych 106 &amp; online         In the...</Summary>
  <Website>https://www.tejasgokhale.com/seminar.html</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/145742/guest@my.umbc.edu/c947837388de3f0bcf0c622cbdacb59a/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>gen-ai</Tag>
  <Tag>images</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>UMBC Cognitive Vision Group</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xxlarge.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xlarge.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/large.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/medium.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/small.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xsmall.jpg?1732047123</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/742/90f95032fef2709fdedd4fabc6e2e03e/xxsmall.jpg?1732047123</ThumbnailUrl>
  <PawCount>1</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 19 Nov 2024 15:30:06 -0500</PostedAt>
</NewsItem>
  <NewsItem contentIssues="true" id="140902" important="false" status="posted" url="https://dev.my.umbc.edu/groups/umbc-ai/posts/140902">
  <Title>Talk: Learning to Synthesize Images, 4-5:15pm ET, Wed. 4/17</Title>
  <Tagline>Advances in Perception, Prediction &amp; Reasoning seminar</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><span><h4><span><strong>Learning to Synthesize Images </strong></span><span><strong>with Multimodal and Hierarchical </strong></span><span><strong>Inputs</strong></span></h4><h4><strong><a href="https://zharry29.github.io/" rel="nofollow external" class="bo">Yu Zeng</a>, JHU </strong></h4><p><strong>April 17, 2024 4:00 – 5:15 PM</strong></p><p><span><strong>ENGR 231, UMBC or <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">Webex</a></strong></span></p><div><span><br></span></div><br><p><span>In recent years, image synthesis and manipulation have experienced remarkable advancements driven by deep learning algorithms and web-scale data, yet there persists a notable disconnect between the intricate nature of human ideas and the simplistic input structures employed by existing models. In this talk, I will present our research toward a more natural approach to controllable image synthesis, inspired by the coarse-to-fine workflow of human artists and the inherently multimodal nature of human thought processes. We consider inputs of semantic and visual modality at varying levels of hierarchy. For the semantic modality, we introduce a general framework for modeling semantic inputs of different levels, which includes image-level text prompts and pixel-level label maps as two extremes and brings a series of mid-level regional descriptions with different precision. For the visual modality, we explore the use of low-level and high-level visual inputs, aligning with the natural hierarchy of visual processing. Additionally, as the misuse of generated images becomes a societal threat, in the second part of this talk I will present our findings on the trustworthiness of deep generative models and potential future research directions.</span></p><br><p><span><strong><a href="https://zharry29.github.io/" rel="nofollow external" class="bo">Yu Zeng</a></strong> is a Ph.D. 
candidate at Johns Hopkins University advised by Vishal M. Patel. Her research interests lie in computer vision and deep learning. She has focused on two main areas: (1) deep generative models for image synthesis and editing and (2) label-efficient deep learning. By combining these research areas, she aims to bridge human creativity and machine intelligence through user-friendly and socially responsible models while minimizing the need for intensive human supervision. Yu has collaborated with researchers at NVIDIA and Adobe through internships. Prior to her Ph.D., she worked as a researcher at Tencent Games. Yu’s research has been recognized by the KAUST Rising Stars in AI, and her Ph.D. study has been supported by a JHU Kewei Yang and Grace Xin Fellowship.</span></p><br></span><div><span>• </span><a href="http://ai.umbc.edu/" rel="nofollow external" class="bo">ai.umbc.edu</a><span> •</span></div></div>
]]>
  </Body>
  <Summary>Learning to Synthesize Images with Multimodal and Hierarchical Inputs  Yu Zeng, JHU   April 17, 2024 4:00 – 5:15 PM  ENGR 231, UMBC or Webex      In recent years, image synthesis and manipulation...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/140902/guest@my.umbc.edu/dc88976cc5ca6f2a10576069fe0f9d31/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>images</Tag>
  <Tag>multimodal</Tag>
  <Tag>vision</Tag>
  <Group token="umbc-ai">UMBC AI</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/umbc-ai</GroupUrl>
  <AvatarUrl>https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="original">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
  <AvatarUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
  <Sponsor>UMBC AI</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xxlarge.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xlarge.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/large.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/medium.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/small.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xsmall.jpg?1713184563</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/902/2c74685cec3a52e4a7092bec7876e18d/xxsmall.jpg?1713184563</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Mon, 15 Apr 2024 08:44:49 -0400</PostedAt>
  <EditAt>Tue, 16 Apr 2024 22:06:11 -0400</EditAt>
</NewsItem>
</News>
