<?xml version="1.0"?>
<News hasArchived="false" page="12" pageCount="206" pageSize="10" timestamp="Thu, 07 May 2026 15:54:28 -0400" url="https://dev.my.umbc.edu/groups/csee/posts.xml?mode=pawpularity&amp;page=12">
  <NewsItem contentIssues="true" id="137739" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137739">
  <Title>Apply: Spring '24 AI4ALL Ignite program</Title>
  <Tagline>Early application deadline is Friday, December 15</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content">
    <a href="https://ai-4-all.org/ai4all-ignite/" rel="nofollow external" class="bo"><img src="https://www.csee.umbc.edu/wp-content/uploads/sites/659/2023/12/ai4all_ignite.png" style="max-width: 100%; height: auto;"></a><div>
    <br><div>
    <strong>What is <a href="https://ai-4-all.org/ai4all-ignite/" rel="nofollow external" class="bo"><span>AI4ALL</span> Ignite</a>? </strong><span>The </span><span>AI4ALL</span><span> Ignite program gives <strong>undergraduates</strong> the opportunity to </span><br><ul>
    <li>Work on an <strong>AI portfolio project</strong> with mentorship and guidance from AI industry experts</li>
    <li>Present their AI portfolio project in a student symposium and network with AI industry professionals</li>
    <li>Participate in extensive, practical training in career readiness and technical AI internship interviewing</li>
    <li>Train in mock AI technical interviews and network in opportunity chats with AI recruiters</li>
    </ul>
    </div>
    <div>AI4ALL Ignite offers a groundbreaking no-cost, virtual opportunity for <strong>undergraduate students</strong> interested in artificial intelligence. This AI career accelerator is designed to prepare you to interview for technical AI internships and includes direct networking with AI industry professionals and recruiters.</div>
    <div><br></div>
    <div><div>As a UMBC student, your application will be prioritized. <span>We recommend applying early (by Friday, 12/15). Admissions are made on a rolling basis and spots are limited; AI4ALL may close applications early if the program fills.</span>
    </div></div>
    <div><br></div>
    <div>
    <strong>Get more information and apply <a href="https://ai-4-all.org/ai4all-ignite/" rel="nofollow external" class="bo">here</a>.</strong><br><br>Email further questions to <a href="mailto:ai4all@cs.umbc.edu">ai4all@cs.umbc.edu</a>.</div>
    </div>
    </div>
]]>
  </Body>
  <Summary>What is AI4ALL Ignite? The AI4ALL Ignite program gives undergraduates the opportunity to    Work on an AI portfolio project with mentorship and guidance from AI industry experts  Present their AI...</Summary>
  <Website>https://ai-4-all.org/ai4all-ignite/</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137739/guest@my.umbc.edu/2ebd2db1a7f7c0c049bf800a66463f49/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>ai4all</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Wed, 13 Dec 2023 11:37:05 -0500</PostedAt>
  <EditAt>Wed, 13 Dec 2023 12:05:56 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="false" id="137544" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137544">
  <Title>PhD Defense: Mehdi Rezaee, 3-5pm Tue 12/5, ITE325b &amp; online</Title>
  <Body>
    <![CDATA[
    <div class="html-content">
    <h5><span>Ph.D. Defense</span></h5>
    <div>
    <div><br></div>
    <h4>From Latent Knowledge Gathering to Side Information Injection in Discrete Sequential Models</h4>
    <div><br></div>
    <h5>Mehdi Rezaee</h5>
    <div><br></div>
    <h5>Tue. Dec. 5, 2023, 3-5 pm ET, ITE 325B and <a href="https://meet.google.com/hzy-eidf-ujf" rel="nofollow external" class="bo">online </a>
    </h5>
    <div><br></div>
    <div>
    <strong>Committee:</strong> Drs. Frank Ferraro (Chair), Seung Jun Kim, Tim Oates, Cynthia Matuszek, and Niranjan Balasubramanian (Stony Brook Univ.)</div>
    <div><br></div>
    <div>Representation learning aims at extracting relevant information from data to represent the input in a way that is sufficient for performing a task. This problem is especially difficult when the data under consideration is both sequential and discrete, as in natural language processing (NLP). From classical methods like topic modeling to modern transformer-based architectures, one seeks to utilize the available information from data or transferable knowledge to learn richer representations. To that end, recent advances in current state-of-the-art models rely on two major strategies: a) Latent Knowledge Gathering, where we encourage a model to recognize semantic and thematically relevant knowledge contained within the training data; methods include clustering techniques like topic modeling and document classification. b) Injecting Background Information, where the goal is to exploit structural or representational priors, such as pretrained models or word embeddings, to facilitate the training phase. Irrespective of the architecture or task, the training process invariably begins with the encoding of high-dimensional documents into more manageable, low-dimensional latent representations. We advocate for these representations to be optimized to capture and utilize more pertinent information, enhancing their efficacy in various language-based tasks. Considering document classification as an example of semantic analysis, both the encoder and decoder are vital in extracting essential information from inputs, especially when dealing with limited training data. Our extensive experiments assess the capabilities of models across various data regimes, highlighting the importance of efficient representation in handling the situation entity classification task.</div>
    <div><br></div>
    <div>In thematic analysis, despite notable advancements, many previous studies have overlooked the extraction of valuable word-level information, such as latent thematic topics pertinent to each word. Additionally, the use of auxiliary knowledge has often been confined to basic applications like weight initialization. Some methods have simplified the process by merely appending external knowledge to the input. Nonetheless, the effective utilization of information, whether derived directly from the data or leveraged from background knowledge, remains a critical factor in document representation. It is essential to ensure that the process of information gathering does not compromise the richness of the original data.</div>
    <div><br></div>
    <div>First, we offer a novel lightweight unsupervised design that shows how to use topic models in conjunction with recurrent neural networks (RNNs) with minimal word-level information loss. Our approach maintains and uses lower-level representations that previous approaches had discarded, and then it gathers and provides that information to a natural language generation model. We conduct extensive experiments to compare the efficiency of the proposed model with previously proposed architectures. The results demonstrate that retaining and exploiting word topic assignments, previously overlooked, leads to new state-of-the-art performance in thematic analysis.</div>
    <div><br></div>
    <div>Second, we consider how background (or side) knowledge can be used to guide model and representation learning of text. This side knowledge can itself be structured, and may often be given categorically. However, the sources of side knowledge can be incomplete, meaning that the side knowledge may be structured but only partially observed. This poses challenges for learning. To handle this, we first focus on incomplete, partially observed side knowledge. We propose using a structured, discrete, semi-supervised variational autoencoder framework, which uses provided side knowledge to represent the original input text. This method is intricately designed to use the partially observed knowledge as a guiding tool, without imposing limitations on the training phase. We show that our approach can robustly handle varying levels of side knowledge observation, and leads to consistent performance gains across multiple language modeling and classification metrics.</div>
    <div><br></div>
    <div>Ultimately, we delve into scenarios where side knowledge is not just incomplete but also contains noise. In this context, we introduce a universal framework for integrating discrete information, based on the information bottleneck principle. This framework involves a thorough theoretical exploration of how side information can be integrated into model parameters. Our extensive theoretical analysis and empirical studies, including a case study on event modeling, show that our approach not only extends and refines previous methods but also significantly enhances performance. The proposed framework lays a robust theoretical groundwork for future research in this domain.</div>
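For readers unfamiliar with it, the information bottleneck principle mentioned above is conventionally stated as follows (this is the standard textbook formulation, not necessarily the dissertation's exact objective): find an encoding Z of input X that is maximally compressed while remaining predictive of a target Y,

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

where I(·;·) denotes mutual information and β trades off compression against predictiveness.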
    </div>
    <div><br></div>
    </div>
]]>
  </Body>
  <Summary>Ph.D. Defense      From Latent Knowledge Gathering to Side Information Injection in Discrete Sequential Models     Mehdi Rezaee     Tue. Dec. 5, 2023, 3-5 pm ET, ITE 325B and online ...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137544/guest@my.umbc.edu/63fd2bdc002dd19e7d5e6cd94c8bae0f/api/pixel</TrackingUrl>
  <Tag>student</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 05 Dec 2023 11:56:21 -0500</PostedAt>
  <EditAt>Thu, 14 Dec 2023 12:28:02 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="false" id="137496" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137496">
  <Title>Talk: Advancing Multimodal Retrieval &amp; Generation, 4 pm 12/4</Title>
  <Tagline>From General to Biomedical Domains</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content">
    <h3><strong>Advancing Multimodal Retrieval and Generation: From General to Biomedical Domains</strong></h3>
    <h3><strong><br></strong></h3>
    <h5>
    <span>Dr. Man Luo, </span><span>Postdoctoral Research Fellow, Mayo Clinic</span>
    </h5>
    <h5>
    <span>Monday, Dec. 4, 2023, 4:00pm ET, via <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">Webex</a> and in ENGR 231</span><strong> </strong>
    </h5>
    <div><br></div>
    <div>
    <strong>Abstract:</strong><span> This talk explores advancements in multimodal retrieval and generation across general and biomedical domains. The first work introduces a multimodal retriever and reader pipeline for vision-based question answering, using image-text queries to retrieve and interpret relevant textual knowledge. The second work simplifies this approach with an efficient end-to-end retrieval model, removing dependencies on intermediate models like object detectors. The final part presents a biomedical-focused multimodal generation model, capable of classifying and explaining labels in images with text prompts. Together, these works demonstrate significant progress in integrating visual and textual data processing in diverse applications.</span><br><br><strong>Bio:</strong><span> <a href="https://www.linkedin.com/in/man-luo-a7aa57178/" rel="nofollow external" class="bo"><strong>Dr. Man Luo</strong></a> is a Postdoctoral Research Fellow at Mayo Clinic with Dr. Imon Banerjee and Dr. Bhavik Patel. Her research sits at the intersection of information retrieval and reading comprehension within natural language processing (NLP) and multimodal domains, with a focus on retrieving and utilizing external knowledge efficiently and with strong generalization. Currently, she is interested in knowledge retrieval, multimodal understanding, and applications of LLMs and VLMs in biomedicine and healthcare. She earned her Ph.D. in 2023 from Arizona State University, advised by Dr. Chitta Baral, and has collaborated with industrial research labs at Salesforce, Meta, and Google.</span>
    </div>
    </div>
]]>
  </Body>
  <Summary>Advancing Multimodal Retrieval and Generation: From General to Biomedical Domains     Dr. Man Luo, Postdoctoral Research Fellow, Mayo Clinic  Monday, Dec. 4, 2023, 4:00pm ET, via Webex and in ENGR...</Summary>
  <Website>https://www.tejasgokhale.com/seminar.html</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137496/guest@my.umbc.edu/fb23b1c6279ce6229a70874132df3d08/api/pixel</TrackingUrl>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>1</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Mon, 04 Dec 2023 11:12:39 -0500</PostedAt>
  <EditAt>Mon, 04 Dec 2023 11:18:22 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="false" id="137348" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137348">
  <Title>Talk: Making Machine Learning Models Safer, 4pm Wed 11/29</Title>
  <Tagline>Data and Model Perspectives</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content">
    <h4><span>Making Machine Learning Models Safer: Data and Model Perspectives</span></h4>
    <div><br></div>
    <h5>Dr. Kowshik Thopalli, Lawrence Livermore National Laboratory</h5>
    <div><br></div>
    <h5>4:00-5:15pm Wed, Nov 29, ENGR 231 and <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">WebEx</a>
    </h5>
    <div><br></div>
    <div>As machine learning systems are increasingly deployed in real-world settings like healthcare, finance, and scientific applications, ensuring their safety and reliability is crucial. However, many state-of-the-art ML models still suffer from issues like poor out-of-distribution generalization, sensitivity to input corruptions, large data requirements, and inadequate calibration, limiting their robustness and trustworthiness for critical real-world applications. In this talk, I will first present a broad overview of different safety considerations for modern ML systems. I will then discuss our recent efforts in making ML models safer from two complementary perspectives: (i) manipulating data and (ii) enriching model capabilities by developing novel training mechanisms. I will discuss our work on designing new data augmentation techniques for object detection, followed by demonstrating how, in the absence of data from desired target domains, one could leverage pre-trained generative models for efficient synthetic data generation. Next, I will present a new paradigm of training deep networks called model anchoring and show how one could achieve properties similar to an ensemble through a single model. I will specifically discuss how model anchoring can significantly enrich the class of hypothesis functions being sampled and demonstrate its effectiveness through improved performance on several safety benchmarks. I will conclude by highlighting exciting future research directions for producing robust ML models by leveraging multi-modal foundation models.</div>
    <div><br></div>
    <div>
    <strong><a href="https://www.linkedin.com/in/kowshik-thopalli/" rel="nofollow external" class="bo">Kowshik Thopalli </a></strong>is a Machine Learning Scientist and a post-doctoral researcher at Lawrence Livermore National Laboratory. His research focuses on developing reliable machine learning models that are robust under distribution shifts. He has published papers on a variety of techniques to address model robustness, including domain adaptation, domain generalization, and test-time adaptation using geometric and meta-learning approaches. His expertise also encompasses integrating diverse knowledge sources, such as domain expert guidance and generative models, to improve model data efficiency, accuracy, and resilience to distribution shifts.  He received his Ph.D. in 2023 from Arizona State University.</div>
    </div>
]]>
  </Body>
  <Summary>Making Machine Learning Models Safer: Data and Model Perspectives     Dr. Kowshik Thopalli, Lawrence Livermore National Laboratory     4:00-5:15pm Wed, Nov 29, ENGR 231 and WebEx     As machine...</Summary>
  <Website>https://www.tejasgokhale.com/seminar.html</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137348/guest@my.umbc.edu/fb1f95eb6777b7c0b2de6de52805c12d/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>machine-learning</Tag>
  <Tag>safety</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>1</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 28 Nov 2023 17:26:53 -0500</PostedAt>
</NewsItem>
  <NewsItem contentIssues="false" id="137340" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137340">
  <Title>Talk: Binding Crypto Context in Network Protocols 12pm 12/1</Title>
  <Body>
    <![CDATA[
    <div class="html-content">
    <h5><span>The UMBC Cyber Defense Lab presents</span></h5>
    <div><br></div>
    <div>
    <h4>Automatically Binding Cryptographic Context to Messages in Network Protocols Using Formal Methods</h4>
    <div><br></div>
    <h5>Enis Golaszewski<br>UMBC CSEE Department</h5>
    <div><br></div>
    <h5>12-1 pm, Friday, 1 Dec. 2023 via <a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo">WebEx</a>
    </h5>
    <div><br></div>
    <div>We present an automatic tool for binding formal network protocol specifications to their underlying cryptographic contexts, eliminating harmful protocol interactions, including Man-in-the-Middle (MitM) attacks. Operating in the strand space model, our tool takes as input an arbitrary two-party protocol specification, infers a cryptographic context from the protocol terms, and outputs a specification for an improved protocol that is the composition of the input protocol and our novel context-exchange protocol. Our context-exchange protocol binds cryptographic values to a unique session, using a Merkle hash tree to represent context. Our tool applies the following operations on context: initialize, append, sign, and verify. For each input protocol specification, our tool outputs context-equivalence security goals, which we then verify using the Cryptographic Protocol Shapes Analyzer (CPSA). To our knowledge, our tool is the first of its kind. It represents a significant step towards eliminating attacks resulting from unwanted protocol interactions, which are the cause of most known structural weaknesses in protocols. Support for this research was provided in part by the National Security Agency under an INSuRE+C grant via Northeastern University.</div>
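The abstract represents session context as a Merkle hash tree supporting initialize, append, sign, and verify operations. As a rough, hypothetical illustration of the Merkle-tree idea only (the class and method names below are invented for this sketch; the actual tool operates on formal strand-space specifications, not Python objects):

```python
import hashlib

def _h(data: bytes) -> bytes:
    """SHA-256 digest used for both leaves and interior nodes."""
    return hashlib.sha256(data).digest()

class MerkleContext:
    """Accumulates cryptographic values as Merkle leaves; the root
    digest binds all of them to a single session."""

    def __init__(self):
        # initialize: start with an empty context
        self.leaves = []

    def append(self, value: bytes) -> None:
        # append: add a cryptographic value (nonce, key, identity, ...)
        self.leaves.append(_h(value))

    def root(self) -> bytes:
        # The digest a party would sign to commit to the whole context.
        level = self.leaves or [_h(b"")]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node if odd
                level = level + [level[-1]]
            level = [_h(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    def verify(self, other_root: bytes) -> bool:
        # verify: both parties hold the same context iff roots match
        return self.root() == other_root
```

Two parties that append the same values in the same order compute the same root, so exchanging (and signing) roots detects any mismatch in the values each side believes are bound to the session.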
    <div><br></div>
    <div>
    <a href="https://www.linkedin.com/in/ennis-golaszewski-88742179/" rel="nofollow external" class="bo">Enis Golaszewski</a> (<a href="mailto:golaszewski@umbc.edu">golaszewski@umbc.edu</a>) is a computer science PhD student at UMBC under Alan T. Sherman, where he studies, researches, and teaches cryptographic protocol analysis.</div>
    <div><br></div>
    <div>Host: Alan T. Sherman, <a href="mailto:sherman@umbc.edu">sherman@umbc.edu</a>; January 16-19, 2024, UMBC SFS/CySP Research Study; Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681.</div>
    </div>
    </div>
]]>
  </Body>
  <Summary>The UMBC Cyber Defense Lab presents      Automatically Binding Cryptographic Context to Messages in Network Protocols Using Formal Methods     Enis Golaszewski UMBC CSEE Department     12-1 pm,...</Summary>
  <Website>https://cisa.umbc.edu/</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137340/guest@my.umbc.edu/915c089754153467f3eb70d5db482cf5/api/pixel</TrackingUrl>
  <Tag>cybersecurity</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 28 Nov 2023 13:08:50 -0500</PostedAt>
  <EditAt>Tue, 28 Nov 2023 13:15:26 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="137288" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137288">
  <Title>Talk: Learning Actions from Humans in Video, 4pm Mon. Nov 27</Title>
  <Tagline>Modeling &amp; understanding actions is key for computer vision</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content">
    <img src="https://www.csee.umbc.edu/wp-content/uploads/sites/659/2023/11/Picture1.png" style="max-width: 100%; height: auto;"><hr>
    <div><span><strong>Advances in Perception, Prediction, and Reasoning</strong></span></div>
    <div>
    <div><br></div>
    <h4>Learning Actions from Humans in Video</h4>
    <div><br></div>
    <h5>4:00-5:15pm ET, Monday, Nov 27, 2023<br>UMBC, Engineering 231 and via <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">WebEx</a>
    </h5>
    <div><br></div>
    <h5>
    <a href="https://www.linkedin.com/in/eadom-dessalene-41b08b1b4/" rel="nofollow external" class="bo">Eadom Dessalene</a> <br>University of Maryland, College Park</h5>
    <div><br></div>
    <div>The prevalent paradigm in computer vision is to transfer advances in object recognition directly to action understanding. In this presentation, I discuss the motivations for an alternative embodied approach centered around the modeling of actions rather than objects, survey our recent work along these lines, and outline promising future directions.</div>
    <div><br></div>
    <div>
    <a href="https://www.linkedin.com/in/eadom-dessalene-41b08b1b4/" rel="nofollow external" class="bo"><strong>Eadom Dessalene</strong></a> is a Ph.D. Candidate at the University of Maryland, College Park, advised by Yiannis Aloimonos and Cornelia Fermuller in the Perception and Robotics Group. Eadom received his bachelor's degree in Computer Science from George Mason University. He has made several important contributions to research on video understanding, ego-centric vision, and action understanding through publications in CVPR, ICLR, T-PAMI, and ICRA, as well as winning first place in the <a href="https://www.cs.umd.edu/article/2020/07/cs-team-wins-epic-kitchen-action-anticipation-challenge" rel="nofollow external" class="bo">2020 EPIC Kitchens Action Anticipation Challenge</a>.</div>
    <div><br></div>
    <div>The <a href="https://www.tejasgokhale.com/seminar.html" rel="nofollow external" class="bo">Advances in Perception, Prediction, and Reasoning </a>(PPR) talks are organized and hosted by UMBC Professor <a href="https://www.tejasgokhale.com/" rel="nofollow external" class="bo">Tejas Gokhale</a>.</div>
    </div>
    <div><br></div>
    </div>
]]>
  </Body>
  <Summary>Advances in Perception, Prediction, and Reasoning      Learning Actions from Humans in Video     4:00-5:15pm ET, Monday, Nov 27, 2023 UMBC, Engineering 231 and via WebEx     Eadom Dessalene ...</Summary>
  <Website>https://www.tejasgokhale.com/seminar.html</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137288/guest@my.umbc.edu/bff3b3da5a23f897e487d43b3f3dc333/api/pixel</TrackingUrl>
  <Tag>actions</Tag>
  <Tag>ai</Tag>
  <Tag>computer-vision</Tag>
  <Tag>talk</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>1</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Sun, 26 Nov 2023 19:29:10 -0500</PostedAt>
</NewsItem>
  <NewsItem contentIssues="true" id="137240" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137240">
    <Title>CMSC Alum and CyberDawg wins Marine Corps Marathon 50K</Title>
    <Tagline>CMSC Alum and CyberDawg wins Marine Corps Marathon</Tagline>
    <Body>
      <![CDATA[
          <div class="html-content">
          <div>
          <p><br></p>
          <p><span><img src="https://www.csee.umbc.edu/wp-content/uploads/sites/659/2023/11/UMBC_Marine2-scaled.jpg" style="max-width: 100%; height: auto;"></span></p>
          <p><span><br></span></p>
          <p>CMSC alum and former CyberDawg <strong>Anna Staats '21</strong> crushed the 48<sup>th</sup> Marine Corps Marathon 50K, <strong>claiming victory</strong> in a remarkable time of 3:35:56!</p>
          </div>
          <div><br></div>
          </div>
      ]]>
    </Body>
    <Summary>
      CMSC alum and former CyberDawg Anna Staats '21 crushed the 48th Marine Corps Marathon 50K, claiming victory in a remarkable time of 3:35:56!
    </Summary>
    <Website>https://wtop.com/dc/2023/10/runners-take-on-48th-annual-marine-corps-marathon/</Website>
    <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137240/guest@my.umbc.edu/36dd49e19a46b7219c20ad779a3b6e2b/api/pixel</TrackingUrl>
    <Tag>cmsc</Tag>
    <Tag>cyberdawg</Tag>
    <Tag>marinecorpsmarathon</Tag>
    <Tag>runwiththemarines</Tag>
    <Tag>usmc</Tag>
    <Group token="csee">Computer Science and Electrical Engineering</Group>
    <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
    <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
    <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
    <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
    <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
    <Sponsor>Computer Science and Electrical Engineering</Sponsor>
    <PawCount>3</PawCount>
    <CommentCount>1</CommentCount>
    <CommentsAllowed>true</CommentsAllowed>
    <PostedAt>Tue, 21 Nov 2023 11:51:35 -0500</PostedAt>
    <EditAt>Fri, 24 Nov 2023 12:04:54 -0500</EditAt>
  </NewsItem>
  <NewsItem contentIssues="false" id="137241" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137241">
  <Title>Talks on cybersecurity and transportation 8-9am ET Tue 11/28</Title>
  <Tagline>Part of the INCS-CoE Expert Community Seminar series</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content">
    <h4>
    <strong><a href="https://incs-coe.org/" rel="nofollow external" class="bo">INCS-CoE</a> Expert Community Seminar on Cybersecurity Issues in Transportation</strong>
    </h4>
    <p>8-9:00 am ET Tuesday, 28 November 2023 via <a href="https://hal.zoom.us/j/96830852382?pwd=SHNHU3hDTWpMSDhheWQxRmpuRldUQT09" rel="nofollow external" class="bo"><strong>Zoom</strong></a></p>
    <p><strong>Host: <a href="https://research.umbc.edu/steiner/" rel="nofollow external" class="bo">Dr. Karl V. Steiner</a></strong>, UMBC VP for Research &amp; Creative Achievement</p>
    <hr>
    
    <p><strong>GraphCAN: Graph-Based Controller Area Network Security</strong></p>
    <p><strong><a href="https://sites.google.com/view/umbc-vlsi-soc/home" rel="nofollow external" class="bo">Dr. Riadul Islam</a></strong>, UMBC, Baltimore, MD, USA</p>
    <p>This talk examines vulnerabilities and security threats associated with the widely adopted vehicular Controller Area Network (CAN). It introduces novel techniques for constructing graphs from CAN data, then presents statistical analyses, machine learning algorithms, and graph neural network approaches as potential means of enhancing CAN security. It also addresses the challenge of processing extensive sensor data within a stringent timing budget, emphasizing the importance of implementing cost-effective intrusion detection algorithms on edge devices.</p>
    <hr>
    <p><strong>Advancing Cyber-Resilience in the Age of Autonomous Vehicles </strong></p>
    <p><strong><a href="https://www.dcc.fc.up.pt/~rmartins/" rel="nofollow external" class="bo">Dr. Rolando Martins</a></strong>, University of Porto, Portugal</p>
    <p>The adoption of autonomous vehicles requires a shift in cyber-physical infrastructures. These vehicles are high-value targets, and while Zero Trust is vital, it is insufficient against modern cyber threats. The traditional firewall-based "fortress" approach falls short against advanced adversaries. This situation has reignited interest in Intrusion Tolerance, underutilized since the '90s due to its complexity. We will showcase cybersecurity work at the University of Porto's Cybersecurity and Privacy Centre (C3P), focusing on autonomous vehicles.</p>
    <hr>
    <p><strong>From Skyjacking to Carjacking: Challenges and Opportunities in Securing Modern Navigation Technologies</strong></p>
    <p><strong><a href="https://www.aanjhan.com/" rel="nofollow external" class="bo">Dr. Aanjhan Ranganathan</a></strong>, Northeastern University, Boston, MA, USA</p>
    <p>Modern transportation systems rely heavily on accurate positioning and navigation technologies, which have become increasingly vulnerable to security threats. In this talk, we will explore the security challenges of positioning and navigation in modern vehicles, including the impact of GPS spoofing on unmanned aerial vehicles (UAVs) and the security problems of instrument landing systems, one of the primary navigation aids for landing in aviation. We will also discuss the security problems of automotive radar, showing how easily radio frequency radar signals can be manipulated to fake distances and velocities, compromising the safety of the vehicle and its passengers. We will see that even with cryptographic primitives, securing positioning, navigation, and timing technologies is no trivial task. The talk aims to highlight the fundamental limits of securing current technologies and calls for the design of secure alternatives.</p>
    </div>
]]>
  </Body>
  <Summary>INCS-CoE Expert Community Seminar on Cybersecurity Issues in Transportation   8-9:00 am ET Tuesday, 28 November 2023 via Zoom   Host: Dr. Karl V. Steiner, UMBC VP for Research &amp; Creative...</Summary>
  <Website>https://incs-coe.org/research/</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137241/guest@my.umbc.edu/41c26721bacf27fc7e21bb3b37516064/api/pixel</TrackingUrl>
  <Tag>cybersecurity</Tag>
  <Tag>incs-coe</Tag>
  <Tag>transportation</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 21 Nov 2023 11:49:03 -0500</PostedAt>
  <EditAt>Tue, 21 Nov 2023 11:50:35 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="137184" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137184">
  <Title>Professor Cynthia Matuszek on Talking to Robots</Title>
  <Tagline>Research by UMBC's Interactive Robotics and Language Lab</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content">
    <div><a href="https://www.youtube.com/watch?v=VhqmyYYbov4" rel="nofollow external" class="bo"><img src="https://www.csee.umbc.edu/wp-content/uploads/sites/659/2023/11/cmat3.png" style="max-width: 100%; height: auto;"></a></div>
    <div><br></div>
    <div>
    <div>CSEE Professor <a href="https://redirect.cs.umbc.edu/~cmat/" rel="nofollow external" class="bo"><strong>Cynthia Matuszek</strong></a> gave a talk at <a href="https://research.umbc.edu/grit-x/" rel="nofollow external" class="bo"><strong>UMBC's 2023 GRIT-X</strong></a> event on the need for physical robotic assistants to be able to understand and use human languages. Watch her 11-minute talk on <a href="https://www.youtube.com/watch?v=VhqmyYYbov4" rel="nofollow external" class="bo"><strong>YouTube</strong></a>.</div>
    <div><br></div>
    <div>As robots become more common and begin to make their way into human environments, it becomes more important for them to interact comfortably with end users. One way to accomplish that is to build robotic systems that can use natural languages (human languages, such as English) to interact with and learn from people around them. In her presentation, she described the concept of grounded language, language that robots can use to understand the physical world around them, and discussed the promise, as well as some of the risks, of language-using robots.</div>
    <div><br></div>
    <div>Dr. Matuszek heads UMBC's <strong><a href="https://iral.cs.umbc.edu/" rel="nofollow external" class="bo">Interactive Robotics and Language Lab</a></strong>, which studies robotics and natural language processing with the goal of developing robots that everyday people can talk to, telling them to do tasks or teaching them about the world around them. The aim is to build robots that can perform tasks in noisy, real-world environments instead of being pre-programmed to handle a fixed set of predetermined tasks.</div>
    <div><br></div>
    <div>You can see all nine of the 2023 GRIT-X talks in this <a href="https://www.youtube.com/watch?v=XBDH_cGi1fU" rel="nofollow external" class="bo"><strong>YouTube video</strong></a>. </div>
    </div>
    </div>
]]>
  </Body>
  <Summary>CSEE Professor Cynthia Matuszek gave a talk at UMBC's 2023 GRIT-X event on the need for physical robotic assistants to be able to understand and use human languages. Watch her 11-minute talk on...</Summary>
  <Website>https://www.youtube.com/watch?v=VhqmyYYbov4</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137184/guest@my.umbc.edu/3a4701c2e88366edb92b8bcc74189084/api/pixel</TrackingUrl>
  <Tag>language</Tag>
  <Tag>matuszek</Tag>
  <Tag>nlp</Tag>
  <Tag>robotics</Tag>
  <Tag>speech</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>1</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Sun, 19 Nov 2023 17:59:44 -0500</PostedAt>
  <EditAt>Sun, 19 Nov 2023 18:57:36 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="137083" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/137083">
  <Title>AI4ALL Ignite program for Spring 2024</Title>
  <Tagline>Get practical training, a strong portfolio, career readiness</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content">
    <img src="https://cybersecurity.umbc.edu/wp-content/uploads/sites/10/2023/11/Screenshot-2023-11-14-at-2.20.05-PM.png" style="max-width: 100%; height: auto;"><div><br></div>
    <div>
    <div>The nonprofit AI4ALL has offered its Discover AI program to UMBC students since 2021. It has now revised its programs and invites UMBC students in computing-related majors who are interested in Artificial Intelligence to apply for its new, free, online <strong><a href="https://ai-4-all.org/ai4all-ignite/" rel="nofollow external" class="bo">AI4ALL Ignite program</a></strong>, which begins in <strong>Spring 2024</strong>.<br>
    </div>
    <div><div>
    <div><br></div>
    <div>
    <strong>What is <a href="https://ai-4-all.org/" rel="nofollow external" class="bo">AI4ALL</a>? </strong>AI4ALL is a national nonprofit transforming the pipeline of AI practitioners and creating a more inclusive, human-centered discipline. It empowers students to be AI changemakers by cultivating an environment where they develop critical-thinking and relationship-building skills and expertise in responsible AI. AI4ALL champions a diverse next generation of AI changemakers through education, ethics, and relationship-driven networks.</div>
    <div>
    <br><strong>What is <a href="https://ai-4-all.org/ai4all-ignite/" rel="nofollow external" class="bo">AI4ALL Ignite</a>? </strong>The AI4ALL Ignite program gives undergraduates the opportunity to:<br><ul>
    <li>Work on an <strong>AI portfolio project</strong> with mentorship and guidance from AI industry experts</li>
    <li>Present their AI portfolio project in a student symposium and network with AI industry professionals</li>
    <li>Participate in extensive and practical training in <strong>career readiness</strong> and technical AI internship interviews</li>
    <li>Train in mock AI technical interviews and network in opportunity chats with AI recruiters</li>
    </ul>
    <strong>How to <a href="https://app.smarterselect.com/programs/91903-Ai4-All?utm_source=Various&amp;utm_medium=Various&amp;utm_campaign=AI4ALL+Ignite+Fall+2023" rel="nofollow external" class="bo">apply</a>.</strong> Eligibility and Deadlines:<br><ul>
    <li>Spring 2024 Cohort Early Application Deadline: December 15, 2023</li>
    <li>Spring 2024 Cohort Final Application Deadline: January 22, 2024</li>
    </ul>
    AI4ALL Ignite is open to undergraduate students in a major or minor related to computing, e.g., Computer Science, Computer Engineering, Information Systems, Mathematics, and Bioinformatics. While all are welcome to apply, AI4ALL prioritizes students whose race, gender, or ethnicity has been historically excluded from AI: Black, Hispanic and Latinx, and Indigenous folks; and women, gender-expansive, and non-binary folks. You can find full eligibility details on the <a href="https://ai-4-all.org/ai4all-ignite/" rel="nofollow external" class="bo"><strong>AI4ALL Ignite website</strong></a>.</div>
    <div><br></div>
    <div>
    <strong>For more information</strong>, sign up for one of the <a href="https://www.eventbrite.com/e/ai4all-programs-information-sessions-tickets-390703323157" rel="nofollow external" class="bo"><strong>45-minute online information sessions</strong></a> offered in the coming weeks. If you have questions, email <strong><a href="mailto:ai4all@cs.umbc.edu" rel="nofollow external" class="bo">ai4all@cs.umbc.edu</a></strong>.
    </div>
    </div></div>
    </div>
    </div>
]]>
  </Body>
  <Summary>The nonprofit AI4ALL has offered its Discover AI program to UMBC students since 2021. It has now revised its programs and invites UMBC students in computing-related majors who are...</Summary>
  <Website>https://ai-4-all.org/ai4all-ignite/</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/137083/guest@my.umbc.edu/d5c8132838c54a210478ce509c640f1e/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>ai4all</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Tue, 14 Nov 2023 14:40:45 -0500</PostedAt>
  <EditAt>Tue, 14 Nov 2023 14:46:40 -0500</EditAt>
</NewsItem>
</News>
