<?xml version="1.0"?>
<News hasArchived="false" page="1" pageCount="1" pageSize="10" timestamp="Mon, 20 Apr 2026 17:12:25 -0400" url="https://dev.my.umbc.edu/groups/csee/posts.xml?tag=images">
  <NewsItem contentIssues="true" id="141170" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/141170">
  <Title>Talk: visible-thermal images for medical applications, 4/24</Title>
  <Tagline>4-5:15 pm ET, Wed., April 24, 2024 in ENGR 231 and online</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><img src="https://ai.umbc.edu/wp-content/uploads/sites/734/2024/04/ordun.jpg" style="max-width: 100%; height: auto;"><div><br></div><div><div><strong>Visible-Thermal Image Registration and Translation for Remote Medical Applications</strong></div><div><br></div><div><strong><a href="https://www.linkedin.com/in/catherine-ordun/" rel="nofollow external" class="bo">Catherine Ordun</a>, Booz Allen Hamilton</strong></div><div><br></div><div><strong>4-5:15 pm ET, Wednesday, April 24, 2024</strong></div><div><strong>UMBC, ENGR 231 and <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">Webex</a></strong></div><div><br></div><div>Thermal imagery captured in the Long Wave Infrared (LWIR) spectrum has long played a vital role in thermal physiology. Signs of stress and inflammation that are unseen in the visible spectrum can be detected in LWIR due to principles of blackbody radiation. As a result, thermal facial imagery provides a unique modality for physiological assessment of states such as chronic pain. In this talk, I will present my research on image registration to align visible-thermal images, a prerequisite for image-to-image translation using conditional <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network" rel="nofollow external" class="bo">GANs</a> and <a href="https://en.wikipedia.org/wiki/Diffusion_model" rel="nofollow external" class="bo">Diffusion Models</a>. I will also share recent work with the National Institutes of Health applying this research in a real-world setting with cancer patients suffering from chronic pain.</div><div><br></div><div><a href="https://www.linkedin.com/in/catherine-ordun/" rel="nofollow external" class="bo">Dr. Catherine Ordun</a> is a Vice President at Booz Allen Hamilton, leading AI Rapid Prototyping and Tech Transfer solutions for mission-critical problems for the Federal Government. 
She drives AI rapid prototyping to support mission-critical proofs of concept across multiple AI domains, in addition to AI tech transfer to support algorithm reuse and consumption. She also leads multimodal AI research supporting the National Cancer Institute on chronic cancer pain detection. Dr. Ordun is a Ph.D. graduate of the UMBC Department of Information Systems, advised by Drs. Sanjay Purushotham and Edward Raff, and obtained her bachelor's degree from Georgia Tech, her master's from Emory, and an MBA from the GWU Business School. She also holds an appointment at UMBC as an Adjunct Research Assistant Professor.</div></div></div>
]]>
  </Body>
  <Summary>Visible-Thermal Image Registration and Translation for Remote Medical Applications     Catherine Ordun, Booz Allen Hamilton     4-5:15 pm ET, Wednesday, April 24, 2024  UMBC, ENGR 231 and Webex...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/141170/guest@my.umbc.edu/ee5fc9f58c1f347c5db043b17fa4e6ba/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>computer-vision</Tag>
  <Tag>diffusion-model</Tag>
  <Tag>gan</Tag>
  <Tag>images</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Mon, 22 Apr 2024 09:43:36 -0400</PostedAt>
</NewsItem>
  <NewsItem contentIssues="true" id="140947" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/140947">
  <Title>Talk: Learning to Synthesize Images, 4-5:15pm ET, Wed. 4/17</Title>
  <Tagline>Advances in Perception, Prediction &amp; Reasoning seminar</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><h4><span><strong>Learning to Synthesize Images </strong></span><span><strong>with Multimodal and Hierarchical </strong></span><span><strong>Inputs</strong></span></h4><h4><strong><a href="https://zharry29.github.io/" rel="nofollow external" class="bo">Yu Zeng</a>, JHU </strong></h4><p><strong>April 17, 2024 4:00 – 5:15 PM</strong></p><p><span><strong>ENGR 231, UMBC or <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">Webex</a></strong></span></p><div><br></div><br><p><span>In recent years, image synthesis and manipulation have experienced remarkable advancements driven by deep learning algorithms and web-scale data, yet a notable disconnect persists between the intricate nature of human ideas and the simplistic input structures employed by existing models. In this talk, I will present our research toward a more natural approach to controllable image synthesis, inspired by the coarse-to-fine workflow of human artists and the inherently multimodal nature of human thought processes. We consider inputs of the semantic and visual modalities at varying levels of hierarchy. For the semantic modality, we introduce a general framework for modeling semantic inputs at different levels, which includes image-level text prompts and pixel-level label maps as two extremes and brings a series of mid-level regional descriptions with different precision. For the visual modality, we explore the use of low-level and high-level visual inputs, aligning with the natural hierarchy of visual processing. Additionally, as the misuse of generated images becomes a societal threat, in the second part of this talk I will introduce our findings on the trustworthiness of deep generative models and potential future research directions.</span></p><br><p><span><strong><a href="https://zharry29.github.io/" rel="nofollow external" class="bo">Yu Zeng</a></strong> is a Ph.D. candidate at Johns Hopkins University advised by Vishal M. Patel. 
Her research interests lie in computer vision and deep learning. She has focused on two main areas: (1) deep generative models for image synthesis and editing and (2) label-efficient deep learning. By combining these research areas, she aims to bridge human creativity and machine intelligence through user-friendly and socially responsible models while minimizing the need for intensive human supervision. Yu has collaborated with researchers at NVIDIA and Adobe through internships. Prior to her Ph.D., she worked as a researcher at Tencent Games. Yu’s research has been recognized by the KAUST Rising Stars in AI, and her Ph.D. study has been supported by a JHU Kewei Yang and Grace Xin Fellowship.</span></p></div>
]]>
  </Body>
  <Summary>Learning to Synthesize Images with Multimodal and Hierarchical Inputs  Yu Zeng, JHU   April 17, 2024 4:00 – 5:15 PM  ENGR 231, UMBC or Webex      In recent years, image synthesis and manipulation...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/140947/guest@my.umbc.edu/d2902ab5728446da5f501daf9d79bb35/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>images</Tag>
  <Tag>multimodal</Tag>
  <Tag>ppr</Tag>
  <Tag>vision</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/947/80114b749f795158bf424e6568f1d2c0/xxlarge.jpg?1713206571</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/947/80114b749f795158bf424e6568f1d2c0/xlarge.jpg?1713206571</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/947/80114b749f795158bf424e6568f1d2c0/large.jpg?1713206571</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/947/80114b749f795158bf424e6568f1d2c0/medium.jpg?1713206571</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/947/80114b749f795158bf424e6568f1d2c0/small.jpg?1713206571</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/947/80114b749f795158bf424e6568f1d2c0/xsmall.jpg?1713206571</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/140/947/80114b749f795158bf424e6568f1d2c0/xxsmall.jpg?1713206571</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Mon, 15 Apr 2024 14:46:13 -0400</PostedAt>
</NewsItem>
</News>
