<?xml version="1.0"?>
<News hasArchived="false" page="2" pageCount="12" pageSize="10" timestamp="Thu, 30 Apr 2026 02:24:08 -0400" url="https://dev.my.umbc.edu/groups/csee/posts.xml?page=2&amp;tag=ai">
  <NewsItem contentIssues="true" id="149205" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/149205">
  <Title>Popular AIs head-to-head: OpenAI beats DeepSeek on sentence-level reasoning</Title>
  <Tagline>Article by UMBC Prof. Manas Gaur from The Conversation</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><div><img src="https://images.theconversation.com/files/659376/original/file-20250402-56-rft1hz.jpg?ixlib=rb-4.1.0&amp;rect=0%2C360%2C7086%2C3977&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" style="max-width: 100%; height: auto;">
            <br>DeepSeek’s language AI rocked the tech industry, but it comes up short on one measure.
              <span><a href="https://www.gettyimages.com/detail/news-photo/this-illustration-photograph-shows-screens-displaying-the-news-photo/2195925950" rel="nofollow external" class="bo">Lionel Bonaventure/AFP via Getty Images</a></span>
            <br>
        
    
      <hr><h4><span><a href="https://theconversation.com/profiles/manas-gaur-2312608" rel="nofollow external" class="bo">Manas Gaur</a></span></h4></div><div>
    
      <p>ChatGPT and other AI chatbots based on large language models are known to occasionally make things up, including <a href="https://doi.org/10.1001/jamanetworkopen.2023.27647" rel="nofollow external" class="bo">scientific and</a> <a href="https://doi.org/10.48550/arXiv.2405.20362" rel="nofollow external" class="bo">legal citations</a>. It turns out that measuring how accurate an AI model’s citations are is a good way of assessing the model’s reasoning abilities.</p>
    
    <p>An AI model “reasons” by breaking down a query into steps and working through them in order. Think of how you learned to solve math word problems in school.</p>
    
    <p>Ideally, to generate citations an AI model would understand the key concepts in a document, generate a ranked list of relevant papers to cite, and provide convincing reasoning for how each suggested paper supports the corresponding text. It would highlight specific connections between the text and the cited research, clarifying why each source matters.  </p>
    
    <p>The question is, can today’s models be trusted to make these connections and provide clear reasoning that justifies their source choices? The answer goes beyond citation accuracy to address how useful and accurate large language models are for any information retrieval purpose.</p>
    
    <p>I’m a <a href="https://scholar.google.co.in/citations?hl=en&amp;user=VJ8ZdCEAAAAJ&amp;view_op=list_works&amp;sortby=pubdate" rel="nofollow external" class="bo">computer scientist</a>. My colleagues (researchers from the AI Institute at the University of South Carolina, Ohio State University and the University of Maryland, Baltimore County) and I have developed the <a href="https://doi.org/10.48550/arXiv.2405.02228" rel="nofollow external" class="bo">Reasons benchmark</a> to test how well large language models can automatically generate research citations and provide understandable reasoning.</p>
    
    <p>We used the benchmark to <a href="https://doi.org/10.48550/arXiv.2405.02228" rel="nofollow external" class="bo">compare the performance</a> of two popular AI reasoning models, DeepSeek’s R1 and OpenAI’s o1. Though DeepSeek <a href="https://www.theguardian.com/business/2025/jan/27/tech-shares-asia-europe-fall-china-ai-deepseek" rel="nofollow external" class="bo">made headlines</a> with its stunning <a href="https://theconversation.com/why-building-big-ais-costs-billions-and-how-chinese-startup-deepseek-dramatically-changed-the-calculus-248431" rel="nofollow external" class="bo">efficiency and cost-effectiveness</a>, the Chinese upstart has a way to go to match OpenAI’s reasoning performance.</p>
    
    <h2>Sentence specific</h2>
    
    <p>The accuracy of citations has a lot to do with whether the AI model is reasoning about information <a href="https://doi.org/10.48550/arXiv.2405.17980" rel="nofollow external" class="bo">at the sentence level</a> rather than paragraph or document level. Paragraph-level and document-level citations can be thought of as throwing a large chunk of information into a large language model and asking it to provide many citations. </p>
    
    <p>In this process, the large language model overgeneralizes and misinterprets individual sentences. The user ends up with citations that <a href="https://doi.org/10.48550/arXiv.2409.02897" rel="nofollow external" class="bo">explain the whole paragraph or document</a>, not the relatively fine-grained information in the sentence.</p>
    
    <p>Further, reasoning suffers when you ask a large language model to read through an entire document. These models mostly rely on memorized patterns, which they are typically better at finding at the beginning and end of longer texts <a href="https://doi.org/10.48550/arXiv.2307.03172" rel="nofollow external" class="bo">than in the middle</a>. This makes it difficult for them to fully understand all the important information throughout a long document.</p>
    
    <p>Large language models get confused because paragraphs and documents hold a lot of information, which affects citation generation and the reasoning process. Consequently, reasoning from large language models over paragraphs and documents becomes more like <a href="https://doi.org/10.48550/arXiv.2411.17375" rel="nofollow external" class="bo">summarizing or paraphrasing</a>.</p>
    
    <p>The Reasons benchmark addresses this weakness by examining large language models’ citation generation and reasoning. </p>
    
    
                <div class="embed-container"><iframe src="https://www.youtube.com/embed/kQZzYMHre0U?wmode=transparent&amp;start=0" frameborder="0" webkitAllowFullScreen="webkitAllowFullScreen" mozallowfullscreen="mozallowfullscreen" allowFullScreen="allowFullScreen">[Video]</iframe></div>
                <span>How DeepSeek R1 and OpenAI o1 compare generally on logic problems.</span>
              
    
    <h2>Testing citations and reasoning</h2>
    
    <p>Following the release of DeepSeek R1 in January 2025, we wanted to examine its accuracy in generating citations and its quality of reasoning and compare it with OpenAI’s o1 model. We created a paragraph that had sentences from different sources, gave the models individual sentences from this paragraph, and asked for citations and reasoning. </p>
    
    <p>To start our test, we developed a small test bed of about 4,100 research articles around four key topics related to human brains and computer science: neurons and cognition, human-computer interaction, databases and artificial intelligence. We evaluated the models using two measures: F-1 score, which measures how accurate the provided citation is, and hallucination rate, which measures how sound the model’s reasoning is; that is, how often it <a href="https://theconversation.com/what-are-ai-hallucinations-why-ais-sometimes-make-things-up-242896" rel="nofollow external" class="bo">produces an inaccurate or misleading response</a>.</p>
    
    <p>Our testing revealed <a href="https://doi.org/10.48550/arXiv.2405.02228" rel="nofollow external" class="bo">significant performance differences</a> between OpenAI o1 and DeepSeek R1 across different scientific domains. OpenAI’s o1 did well connecting information between different subjects, such as understanding how research on neurons and cognition connects to human-computer interaction and then to concepts in artificial intelligence, while remaining accurate. Its performance metrics consistently outpaced DeepSeek R1’s across all evaluation categories, especially in reducing hallucinations and successfully completing assigned tasks. </p>
    
    <p>OpenAI o1 was better at combining ideas semantically, whereas R1 focused on making sure it generated a response for every attribution task, which in turn increased hallucination during reasoning. OpenAI o1 had a hallucination rate of approximately 35% compared with DeepSeek R1’s rate of nearly 85% in the attribution-based reasoning task.</p>
    
    <p>In terms of accuracy and linguistic competence, OpenAI o1 scored about 0.65 on the F-1 test, which means it was right about 65% of the time when answering questions. It also scored about 0.70 on the BLEU test, which measures how well a language model writes in natural language. These are pretty good scores. </p>
    
    <p>DeepSeek R1 scored lower, with about 0.35 on the F-1 test, meaning it was right about 35% of the time. However, its BLEU score was only about 0.2, which means its writing wasn’t as natural-sounding as OpenAI’s o1. This shows that o1 was better at presenting that information in clear, natural language.</p>
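    <p>As a rough illustration of the F-1 measure discussed above (a toy sketch, not the benchmark’s actual code, with made-up paper names), F-1 is the harmonic mean of precision and recall over the sets of predicted and reference citations:</p>

```python
# Illustrative only: F-1 over citation sets, treating a model's suggested
# citations and the gold-standard citations as sets of paper identifiers.

def f1_score(predicted, gold):
    """Harmonic mean of precision (share of suggestions that are correct)
    and recall (share of correct citations that were suggested)."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    if not predicted or not gold or true_positives == 0:
        return 0.0
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)

# A model cites two of three relevant papers plus one spurious paper:
# precision = 2/3, recall = 2/3, so F-1 = 2/3 (about 0.67).
score = f1_score(["paperA", "paperB", "paperX"], ["paperA", "paperB", "paperC"])
```

    <p>On this scale, o1’s roughly 0.65 means it suggested mostly correct citations without missing many, while R1’s 0.35 reflects many spurious or missing ones.</p>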
    
    <h2>OpenAI holds the advantage</h2>
    
    <p>On other benchmarks, DeepSeek R1 <a href="https://doi.org/10.1038/d41586-025-00229-6" rel="nofollow external" class="bo">performs on par</a> with OpenAI o1 on math, coding and scientific reasoning tasks. But the substantial difference on our benchmark suggests that o1 provides more reliable information, while R1 struggles with factual consistency. </p>
    
    <p>Though we included other models in our comprehensive testing, the performance gap between o1 and R1 specifically highlights the current competitive landscape in AI development, with OpenAI’s offering maintaining a significant advantage in reasoning and knowledge integration capabilities.</p>
    
    <p>These results suggest that OpenAI still has a leg up when it comes to source attribution and reasoning, possibly due to the nature and volume of the data it was trained on. The company recently announced its <a href="https://doi.org/10.1038/d41586-025-00377-9" rel="nofollow external" class="bo">deep research tool</a>, which can create reports with citations, ask follow-up questions and provide reasoning for the generated response. </p>
    
    <p>The jury is still out on the tool’s value for researchers, but the caveat remains for everyone: Double-check all citations an AI gives you.</p>
    
      <p><span><a href="https://theconversation.com/profiles/manas-gaur-2312608" rel="nofollow external" class="bo">Manas Gaur</a>, Assistant Professor of Computer Science and Electrical Engineering, <em><a href="https://theconversation.com/institutions/university-of-maryland-baltimore-county-1667" rel="nofollow external" class="bo">University of Maryland, Baltimore County</a></em></span></p>
    
      <p>This article is republished from <a href="https://theconversation.com" rel="nofollow external" class="bo">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/popular-ais-head-to-head-openai-beats-deepseek-on-sentence-level-reasoning-249109" rel="nofollow external" class="bo">original article</a>.</p>
    </div></div>
]]>
  </Body>
  <Summary>DeepSeek’s language AI rocked the tech industry, but it comes up short on one measure. ChatGPT and other...</Summary>
  <Website>https://theconversation.com/popular-ais-head-to-head-openai-beats-deepseek-on-sentence-level-reasoning-249109</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/149205/guest@my.umbc.edu/c453928d24259c1806390603f2878a73/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>chatgpt</Tag>
  <Tag>deepseek</Tag>
  <Tag>large-language-model</Tag>
  <Tag>llm</Tag>
  <Tag>openai</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Thu, 17 Apr 2025 16:28:01 -0400</PostedAt>
  <EditAt>Thu, 17 Apr 2025 16:44:15 -0400</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="148729" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/148729">
  <Title>CSEE alumna Dr. Randi Williams to give URCAD 29 keynote 4/16</Title>
  <Tagline>12-1pm April 16, 2025 in UC 312</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><img src="https://urcad.umbc.edu/wp-content/uploads/sites/382/2025/02/urcad2025_Landscape_B.jpg" style="max-width: 100%; height: auto;"><div><br></div><div><span><p><span>CSEE alumna </span><a href="https://randiwilliams.com/" rel="nofollow external" class="bo"><span><strong>Dr. Randi Williams</strong></span></a><span> ’16, B.S., computer engineering, is the </span><a href="https://urcad.umbc.edu/keynotespeaker/" rel="nofollow external" class="bo"><span><strong>keynote speaker</strong></span></a><span> for UMBC’s Undergraduate Research and Creative Achievement Day (</span><a href="https://urcad.umbc.edu/" rel="nofollow external" class="bo"><span><strong>URCAD</strong></span></a><span>). She will give the keynote talk from 12-1 pm on April 16 in University Center room 312.</span></p><p><span>Dr. Williams earned her Ph.D. (’24) and M.S. (’18) from MIT, where she was a member of the Media Lab's </span><a href="https://www.media.mit.edu/groups/personal-robots/overview/" rel="nofollow external" class="bo"><span><strong>Personal Robots Group</strong></span></a><span>. At UMBC, she was a Meyerhoff Scholar, Honors College student, CWIT affiliate, and a founder of HackUMBC, and she worked with Dr. Nilanjan Banerjee in the Mobile, Pervasive, and Sensor Systems Laboratory.</span></p><p><span>She currently leads research at </span><a href="https://dayofai.org/" rel="nofollow external" class="bo"><span><strong>Day of AI</strong></span></a>, <span>where she works to help equip K-12 students of all backgrounds and abilities to thrive in an AI-driven world. She will join CMU as a research professor in the </span><span><a href="https://hcii.cmu.edu/" rel="nofollow external" class="bo"><strong>Human Computer Interaction Institute</strong></a> </span><span>in July 2026.</span></p><p><span>See the complete schedule of URCAD presentations, posters, exhibits, films and interactive video games <strong><a href="https://urcad.umbc.edu/schedule/" rel="nofollow external" class="bo">here</a>.</strong></span></p></span></div></div>
]]>
  </Body>
  <Summary>CSEE alumna Dr. Randi Williams ’16, B.S., computer engineering, is the keynote speaker for UMBC’s Undergraduate Research and Creative Achievement Day (URCAD). She will give the keynote talk from...</Summary>
  <Website>https://urcad.umbc.edu/keynotespeaker/</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/148729/guest@my.umbc.edu/c153fe43506d866b21d6cc8c9a4e812c/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>keynote</Tag>
  <Tag>robotics</Tag>
  <Tag>umbc</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>1</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Sun, 06 Apr 2025 17:49:40 -0400</PostedAt>
</NewsItem>
  <NewsItem contentIssues="false" id="148715" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/148715">
  <Title>UMBC receives $3.8M DARPA award to assess the feasibility of scientific claims</Title>
  <Body>
    <![CDATA[
    <div class="html-content"><span><span>A multidisciplinary team led by UMBC COEIT researchers was awarded $3.8M from DARPA's <a href="https://www.darpa.mil/research/programs/scientific-feasibility" rel="nofollow external" class="bo"><strong>SciFy: Scientific Feasibility</strong></a> program to explore new computational methods for assessing the feasibility of scientific claims. The team is led by CSEE Professor <a href="https://userpages.cs.umbc.edu/ferraro/" rel="nofollow external" class="bo"><strong>Frank Ferraro</strong></a> and includes UMBC faculty <a href="https://www.tejasgokhale.com/" rel="nofollow external" class="bo"><strong>Tejas Gokhale</strong></a> (CSEE) and <a href="https://cbee.umbc.edu/josephson/" rel="nofollow external" class="bo"><strong>Tyler Josephson</strong></a> (CBEE), as well as colleagues from Stony Brook University, the University of Texas at Austin, and the University of Cambridge.</span></span><div><span><br></span></div><div><span><p><span>The key problem to be addressed is developing a process that can break down claims in a given scientific domain into constituent components that can then be assessed. During the 32-month program, the researchers will develop and test their tools on three leading areas of scientific research: materials science, AI, and quantum computing. These </span><span>domains were chosen to represent a progression in scientific complexity.</span></p><p><span>Read more about this new research award </span><a href="https://umbc.edu/quick-posts/ai-to-assess-the-feasibility-of-scientific-claims/" rel="nofollow external" class="bo"><strong>here</strong></a><span>.</span></p></span></div></div>
]]>
  </Body>
  <Summary>A multidisciplinary team led by UMBC COEIT researchers was awarded $3.8M from DARPA's SciFy: Scientific Feasibility program to explore new computational methods for assessing the feasibility of...</Summary>
  <Website>https://umbc.edu/quick-posts/ai-to-assess-the-feasibility-of-scientific-claims/</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/148715/guest@my.umbc.edu/24c4825f7339a1660efcf08ebe8a3aec/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>darpa</Tag>
  <Tag>llm</Tag>
  <Tag>research</Tag>
  <Tag>scify</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Computer Science and Electrical Engineering</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Fri, 04 Apr 2025 17:35:36 -0400</PostedAt>
</NewsItem>
  <NewsItem contentIssues="true" id="148459" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/148459">
    <Title>Workshop: Ethical Leadership in the Age of AI 4/8</Title>
    <Tagline>3 sessions on Tue., April 8 at 10-11:30, 12-1:30, and 2-3:30</Tagline>
    <Body>
      <![CDATA[
          <div class="html-content"><span><p><span>UMBC Campus Life will hold workshops on <strong>Ethical Leadership in the Age of AI</strong> on Tuesday, April 8, 2025, in Commons 331. The workshop is aimed at students, but faculty and staff are welcome to attend as well. To accommodate schedules, the 90-minute workshop will be offered at 10 am, noon, and 2 pm.</span></p><p><span>The workshop will address what it means to engage with AI systems in an increasingly digital world and cover the principles of ethical leadership, AI challenges and governance, ethical dilemmas with AI, and more.</span></p></span><div><span><a href="https://forms.gle/G1XdC7XPzW47TzBm7" rel="nofollow external" class="bo"><strong>Register here</strong></a> for one of the three sessions and answer a few optional questions to let the organizers know more about what you want to learn. You will then receive a confirmation email with some additional information.</span><div><br></div></div></div>
      ]]>
    </Body>
    <Summary>UMBC Campus Life will hold workshops on Ethical Leadership in the Age of AI on Tuesday, April 8, 2025, in Commons 331. The workshop is aimed at students, but faculty and staff are welcome to...</Summary>
    <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/148459/guest@my.umbc.edu/dd1e483d64079a726534b0702ad51169/api/pixel</TrackingUrl>
    <Tag>ai</Tag>
    <Tag>ethics</Tag>
    <Tag>workshop</Tag>
    <Group token="csee">Computer Science and Electrical Engineering</Group>
    <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
    <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
    <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
    <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
    <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
    <Sponsor>UMBC Campus Life</Sponsor>
    <ThumbnailUrl size="xxlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/459/bc79f98852bdfe9d25af52b7e5269ce5/xxlarge.jpg?1743428291</ThumbnailUrl>
    <ThumbnailUrl size="xlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/459/bc79f98852bdfe9d25af52b7e5269ce5/xlarge.jpg?1743428291</ThumbnailUrl>
    <ThumbnailUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/459/bc79f98852bdfe9d25af52b7e5269ce5/large.jpg?1743428291</ThumbnailUrl>
    <ThumbnailUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/459/bc79f98852bdfe9d25af52b7e5269ce5/medium.jpg?1743428291</ThumbnailUrl>
    <ThumbnailUrl size="small">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/459/bc79f98852bdfe9d25af52b7e5269ce5/small.jpg?1743428291</ThumbnailUrl>
    <ThumbnailUrl size="xsmall">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/459/bc79f98852bdfe9d25af52b7e5269ce5/xsmall.jpg?1743428291</ThumbnailUrl>
    <ThumbnailUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/459/bc79f98852bdfe9d25af52b7e5269ce5/xxsmall.jpg?1743428291</ThumbnailUrl>
    <PawCount>1</PawCount>
    <CommentCount>0</CommentCount>
    <CommentsAllowed>true</CommentsAllowed>
    <PostedAt>Mon, 31 Mar 2025 09:39:56 -0400</PostedAt>
    <EditAt>Mon, 31 Mar 2025 09:40:58 -0400</EditAt>
  </NewsItem>
  <NewsItem contentIssues="true" id="148353" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/148353">
    <Title>Expanded set of powerful AI tools for the UMBC community</Title>
    <Body>
      <![CDATA[
          <div class="html-content"><p><span>UMBC’s Division of Information Technology </span><a href="https://doit.umbc.edu/ai/post/148330/" rel="nofollow external" class="bo"><span><strong>announced</strong></span></a><span> the expanded availability of powerful AI tools to support the UMBC community in </span><span>teaching, learning, research, and productivity.</span></p><p><span>All UMBC accounts now have access to the free version of Google </span><a href="https://gemini.google.com/app" rel="nofollow external" class="bo"><span><strong>Gemini</strong></span></a><span>, which has a wide range of AI-powered features that can assist with writing, brainstorming, and information-gathering tasks. The Gemini suite also includes the new </span><a href="http://notebooklm.google.com/" rel="nofollow external" class="bo"><span><strong>NotebookLM</strong></span></a><span> system, which helps users understand and organize information from uploaded documents and can generate audio overviews of that information. It also includes Gemini’s </span><a href="https://gemini.google/overview/deep-research/" rel="nofollow external" class="bo"><span><strong>Deep Research</strong></span></a><span> tool. </span><span>Paid licenses for </span><a href="https://gemini.google/advanced/" rel="nofollow external" class="bo"><span><strong>Gemini Advanced</strong></span></a><span> will be available in some cases.</span></p><p><span>All UMBC accounts now have default access to the free version of Microsoft </span><a href="http://copilot.microsoft.com/" rel="nofollow external" class="bo"><span><strong>Copilot</strong></span></a><span>, which provides an AI assistant for brainstorming, email generation, and more. Paid licenses for Copilot Pro, which allow integration with the Microsoft 365 suite (Word, Excel, Teams, …), are available in some cases.</span></p><p><span>UMBC now has a local instance of Amplify, an innovative AI tool developed by </span><a href="https://www.vanderbilt.edu/generative-ai/custom-software-pilot-amplify/" rel="nofollow external" class="bo"><span><strong>Vanderbilt University</strong></span></a><span> that provides access to commercial AI models hosted within UMBC's secure cloud environment, including OpenAI’s GPT-4o and o1-series models, Meta’s Llama models, and Anthropic’s Claude models. Access to UMBC's Amplify is available to faculty and staff upon submission of an </span><a href="https://doit.umbc.edu/ai/support" rel="nofollow external" class="bo"><span><strong>access request</strong></span></a><span>.</span></p><span>See DoIT’s </span><a href="https://doit.umbc.edu/ai/genai-tools/" rel="nofollow external" class="bo"><span><strong>GenAI Tools</strong></span></a><span> page for more information.</span></div>
      ]]>
    </Body>
    <Summary>UMBC’s Division of Information Technology announced the expanded availability of powerful AI tools to support the UMBC community in teaching, learning, research, and productivity.  All UMBC...</Summary>
    <Website>https://doit.umbc.edu/ai/genai-tools/</Website>
    <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/148353/guest@my.umbc.edu/11bc3d399772b840a51d6abe210b0b56/api/pixel</TrackingUrl>
    <Tag>ai</Tag>
    <Tag>amplify</Tag>
    <Tag>copilot</Tag>
    <Tag>gemini</Tag>
    <Tag>genai</Tag>
    <Tag>llm</Tag>
    <Tag>notebooklm</Tag>
    <Group token="csee">Computer Science and Electrical Engineering</Group>
    <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
    <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
    <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
    <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
    <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
    <Sponsor>Computer Science and Electrical Engineering</Sponsor>
    <ThumbnailUrl size="xxlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/353/57302375b8147a9d3a80eb5d60b6c371/xxlarge.jpg?1743020058</ThumbnailUrl>
    <ThumbnailUrl size="xlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/353/57302375b8147a9d3a80eb5d60b6c371/xlarge.jpg?1743020058</ThumbnailUrl>
    <ThumbnailUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/353/57302375b8147a9d3a80eb5d60b6c371/large.jpg?1743020058</ThumbnailUrl>
    <ThumbnailUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/353/57302375b8147a9d3a80eb5d60b6c371/medium.jpg?1743020058</ThumbnailUrl>
    <ThumbnailUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/353/57302375b8147a9d3a80eb5d60b6c371/small.jpg?1743020058</ThumbnailUrl>
    <ThumbnailUrl size="xsmall">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/353/57302375b8147a9d3a80eb5d60b6c371/xsmall.jpg?1743020058</ThumbnailUrl>
    <ThumbnailUrl size="xxsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/148/353/57302375b8147a9d3a80eb5d60b6c371/xxsmall.jpg?1743020058</ThumbnailUrl>
    <PawCount>1</PawCount>
    <CommentCount>1</CommentCount>
    <CommentsAllowed>true</CommentsAllowed>
    <PostedAt>Wed, 26 Mar 2025 16:18:43 -0400</PostedAt>
  </NewsItem>
  <NewsItem contentIssues="true" id="147756" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/147756">
  <Title>Two AI events featuring Virginia Dignum on Wednesday March 5</Title>
  <Body>
    <![CDATA[
    <div class="html-content"><div><span>Well-known AI researcher </span><a href="https://en.wikipedia.org/wiki/Virginia_Dignum" rel="nofollow external" class="bo"><span><strong>Virginia Dignum</strong></span></a><span> will lead two </span><span>in-person</span><span> events on </span><span>Wednesday, March 5</span><span> at UMBC sponsored by the Department of English. Both will take place in room 216 of the Performing Arts and Humanities Building.</span></div><div><span><br></span></div><div><div><div><span><p><span>From </span><span>12-1 pm</span><span> she will lead a workshop on </span><a href="https://my3.my.umbc.edu/groups/dreshercenter/events/137495" rel="nofollow external" class="bo"><span><strong>AI and the Humanities</strong></span></a><span> in PAHB 216 that covers real-life examples of AI problems that have benefited from the critical, interpretive and analytical capabilities that humanities training can supply. </span><a href="https://docs.google.com/forms/d/e/1FAIpQLSd53mS37KJxT4cJTjTA2Tz7n0RZ8sSwkmPaVxPXYTU8bb14ag/viewform" rel="nofollow external" class="bo"><span><strong>Register here</strong></span></a><span>.</span></p><p><span>From </span><span>4-5 pm</span><span> she will give a talk on the </span><a href="https://my3.my.umbc.edu/groups/dreshercenter/events/139560" rel="nofollow external" class="bo"><span><strong>AI Paradox</strong></span></a><span> also in PAHB 216. She will discuss the often contradictory nature of AI, exploring how its advancements highlight the irreplaceable qualities of human intelligence and the importance of governance. 
</span><a href="https://my3.my.umbc.edu/groups/dreshercenter/events/139560" rel="nofollow external" class="bo"><span><strong>Register here</strong></span></a><span>.</span></p><p><a href="https://www.umu.se/en/staff/virginia-dignum/" rel="nofollow external" class="bo"><span><strong>Virginia Dignum</strong></span></a><span><strong> </strong>is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the </span><a href="https://aipolicylab.se/" rel="nofollow external" class="bo"><span>AI Policy Lab</span></a><span>. She is also senior advisor on AI policy to the Wallenberg Foundations. She received a PhD in Artificial Intelligence from Utrecht University in 2004, is a member of the Royal Swedish Academy of Engineering Sciences, and is a Fellow of the European Artificial Intelligence Association. She is a member of the United Nations Advisory Body on AI, the Global Partnership on AI, UNESCO’s expert group on the implementation of AI recommendations, and OECD’s Expert group on AI, as well as founder of ALLAI, the Dutch AI Alliance, and co-chair of the WEF’s Global Future Council on AI. She was a member of the EU’s High Level Expert Group on Artificial Intelligence and leader of UNICEF's guidance for AI and children. Her new book, </span><span><strong>The AI Paradox</strong></span><span>, is planned for publication in 2025.</span></p></span></div></div></div></div>
]]>
  </Body>
  <Summary>Well-known AI researcher Virginia Dignum will lead two in-person events on Wednesday, March 5 at UMBC sponsored by the Department of English. Both will take place in room 216 of the Performing...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/147756/guest@my.umbc.edu/7bae14e66e6116586be1c27748201678/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>dignum</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>Department of English</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/756/570ad34b315e6329f1d12872ecb9a7ea/xxlarge.jpg?1740958259</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/756/570ad34b315e6329f1d12872ecb9a7ea/xlarge.jpg?1740958259</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/756/570ad34b315e6329f1d12872ecb9a7ea/large.jpg?1740958259</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/756/570ad34b315e6329f1d12872ecb9a7ea/medium.jpg?1740958259</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/756/570ad34b315e6329f1d12872ecb9a7ea/small.jpg?1740958259</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/756/570ad34b315e6329f1d12872ecb9a7ea/xsmall.jpg?1740958259</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/147/756/570ad34b315e6329f1d12872ecb9a7ea/xxsmall.jpg?1740958259</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Sun, 02 Mar 2025 18:38:57 -0500</PostedAt>
</NewsItem>
  <NewsItem contentIssues="true" id="146825" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/146825">
  <Title>Talk: Do LLMs Exhibit Cybersecurity Misconceptions? 1/31 online</Title>
  <Tagline>Evaluation of LLMs on CCI and CCA examinations</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><h4>Do LLMs Show Cybersecurity Misconceptions?<br></h4><h5>Evaluation of LLM Performance on Cybersecurity Concept Inventories</h5><h5>Shan Huang, UIUC</h5><div><strong>Joint work with Jeffrey Herman and Alan Sherman, et al.</strong></div><div><strong>12:00–1pm ET Friday, Jan. 31, 2025, <a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo">online</a></strong> </div><div><br></div><div>We evaluated the performance of five LLMs (Llama a, GPT-3.5-turbo, GPT-4, GPT-4o, and GPT-O1) on two cybersecurity concept inventories: <a href="https://dl.acm.org/doi/fullHtml/10.1145/3451346" rel="nofollow external" class="bo"><strong>Cybersecurity Concept Inventory</strong></a> (CCI) and <strong><a href="https://dl.acm.org/doi/10.1145/3545945.3569762" rel="nofollow external" class="bo">Cybersecurity Curriculum Assessment</a> </strong>(CCA). Using a zero-shot setting to minimize external influencing factors, we compared the performance of these LLMs with that of students previously studied, and we conducted a qualitative analysis of GPT-O1's output to examine whether it exhibits misconceptions. Quantitative analysis reveals that, for the CCI and CCA, GPT-O1 significantly outperformed other models and students, correctly answering 92% of CCI and 72% of CCA test items. These results indicate GPT-O1’s strong proficiency in foundational topics (CCI) but reveal its limitations in addressing these concepts in more technically advanced scenarios (CCA). Qualitative analysis of GPT-O1’s reasoning patterns uncovered instances of insightful reasoning but also highlighted ways in which GPT-O1's answers reflect persistent student mistakes, such as biases, overgeneralizations, and logical inconsistencies. 
This work highlights the significant potential of GPT-O1 as a tool for introductory cybersecurity education in its ability to provide detailed explanations and structured reasoning for novice learners.</div><div><br></div><div><strong><a href="https://www.linkedin.com/in/shan-huang-262041193/" rel="nofollow external" class="bo">Shan Huang</a> </strong>is a Ph.D. candidate in Computer Science at the University of Illinois Urbana-Champaign. She is broadly interested in how educational games can improve student learning. Current work includes improving student learning in cybersecurity with educational games and assessing student knowledge of cybersecurity concepts. Shan is also involved in various educational data mining projects.</div><div><br></div></div>
]]>
  </Body>
  <Summary>Do LLMs Show Cybersecurity Misconceptions?   Evaluation of LLMs Performance on Cybersecurity Concept Inventories  Shan Huang, UIUC  Joint work with Jeffrey Herman and Alan Sherman, et al....</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/146825/guest@my.umbc.edu/044914d4a9cdbb4e5ab72a950671238d/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>cca</Tag>
  <Tag>cci</Tag>
  <Tag>cybersecurity</Tag>
  <Tag>llm</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>UMBC Cyber Defense Lab</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/146/825/3def1b1b9a6485dfed1de61169c9cf44/xxlarge.jpg?1738157591</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/146/825/3def1b1b9a6485dfed1de61169c9cf44/xlarge.jpg?1738157591</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/146/825/3def1b1b9a6485dfed1de61169c9cf44/large.jpg?1738157591</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/146/825/3def1b1b9a6485dfed1de61169c9cf44/medium.jpg?1738157591</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/146/825/3def1b1b9a6485dfed1de61169c9cf44/small.jpg?1738157591</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/146/825/3def1b1b9a6485dfed1de61169c9cf44/xsmall.jpg?1738157591</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/146/825/3def1b1b9a6485dfed1de61169c9cf44/xxsmall.jpg?1738157591</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Wed, 29 Jan 2025 08:55:54 -0500</PostedAt>
</NewsItem>
  <NewsItem contentIssues="false" id="146646" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/146646">
    <Title>Prof. Naghmeh Karimi funded to study computing-in-memory AI accelerators</Title>
    <Body>
      <![CDATA[
          <div class="html-content"><p><a href="https://userpages.cs.umbc.edu/nkarimi/" rel="nofollow external" class="bo"><strong><span>Naghmeh Karimi</span>,</strong></a> an associate professor in the Department of Computer Science and Electrical Engineering, was recently granted more than $300,000 in funding from the <a href="https://www.src.org/" rel="nofollow external" class="bo"><strong>Semiconductor Research Corporation</strong></a> (SRC) to study the security of promising hardware components that speed up the computing process. </p><p>SRC brings together technology companies, academics, and government agencies to tackle large scientific and technical challenges, and Karimi’s research will be funded by three leading technology companies: IBM-Research, AMD, and Siemens. </p><p>Karimi and her team, including graduate students and a collaborator from Arizona State University, will study computer chips whose design and structure allow <strong>computing-in-memory (CiM)</strong>, where data processing happens directly within the computer’s memory. CiM architectures are promising for speeding up the use of machine learning algorithms because they consume less energy.</p><p>Different types of CiM devices (such as RRAM, MRAM, and SRAM) each have their own strengths and weaknesses in terms of performance, power use, and size, and to get the best results, engineers need to combine different CiM devices into one system. Building these systems in 3D layers can further improve their efficiency and performance. However, the security of these 3D architectures has received little attention to date. </p><img width="1200" height="730" src="https://umbc.edu/wp-content/uploads/2025/01/Karimi-figure-1200x730.png" alt="Schematic shows layers of computing elements." style="max-width: 100%; height: auto;">Karimi and her team will study the security of computing-in-memory architectures, as shown in this project overview. 
(Image courtesy of Karimi)<p><br></p><p>Karimi’s team will study the security of 3D CiM technologies used in AI applications. In particular, the research will focus on evaluating the security vulnerabilities of the technologies and developing mitigation strategies. </p><p>“I’m excited about this project because the topic is very timely,” says Karimi. “The support from three leading companies in the AI field shows the importance of the problem and the promise of the solutions we are working on.”</p><p>The team aims to enhance the security of CiM-based AI accelerators against physical attacks that adversaries might launch to leak sensitive data or induce malfunctions. The researchers will work closely with the funding companies over the next three years in this area.</p><p>This post was written by <a href="https://umbc.edu/author/cmeyers2/" rel="nofollow external" class="bo">Catherine Meyers</a> and originally published online <a href="https://umbc.edu/quick-posts/naghmeh-karimi-ai-accelerators/" rel="nofollow external" class="bo">here</a>.</p></div>
      ]]>
    </Body>
    <Summary>Naghmeh Karimi, an associate professor in the Department of Computer Science and Electrical Engineering, was recently granted more than $300,000 in funding from the Semiconductor Research...</Summary>
    <Website>https://umbc.edu/quick-posts/naghmeh-karimi-ai-accelerators/</Website>
    <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/146646/guest@my.umbc.edu/9be8707fe9fea421fbc627709b9832de/api/pixel</TrackingUrl>
    <Tag>ai</Tag>
    <Tag>funding</Tag>
    <Tag>research</Tag>
    <Group token="csee">Computer Science and Electrical Engineering</Group>
    <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
    <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
    <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
    <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
    <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
    <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
    <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
    <Sponsor>Computer Science and Electrical Engineering</Sponsor>
    <PawCount>2</PawCount>
    <CommentCount>0</CommentCount>
    <CommentsAllowed>true</CommentsAllowed>
    <PostedAt>Tue, 21 Jan 2025 16:27:28 -0500</PostedAt>
  </NewsItem>
  <NewsItem contentIssues="false" id="146149" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/146149">
  <Title>CodeBot '25: Can We Trust AI-Generated Code? 2/25-26</Title>
  <Tagline>Workshop Feb. 25-26, 2025 in Columbia, MD and online</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><div><div><strong><br></strong><h3><strong>Can We Trust AI-Generated Code?</strong></h3></div><h5><strong>Workshop sponsored by UMBC &amp; Army Research Laboratory</strong></h5><h5><span>Feb. 25-26, 2025 </span><span>UMBC Training Centers, Columbia, MD &amp; online<br></span><br></h5><h4><span>  </span><span>Submit Position papers by </span><strong>January 20, 2025</strong></h4><br>The era of generative AI is upon us, and chatbots such as ChatGPT are being used by programmers at all levels of experience to produce code.  Some generative AI systems, such as <a href="https://cloud.google.com/gemini/docs/codeassist/overview" rel="nofollow external" class="bo"><strong>Gemini Code Assist</strong></a>, specialize in code generation.  Unfortunately, AI-generated code often contains errors in the form of functionality that fails to meet specifications or vulnerabilities that can be exploited by hackers.  People have been working on program verification and secure coding for sixty years, but even so, the skills needed to find such errors are possessed by only a fraction of software engineers, and these skills are not being passed on to student programmers as they should be.<br><br>The goal of this FREE workshop is to gather and produce actionable ideas and suggestions that may be of use to the IT profession.  The workshop will consist of invited speakers, panels, and open discussion. 
</div><div><br></div><div><strong>We invite would-be participants to submit short position papers offering comments, observations, experiences, and suggestions that pertain to any or all of the following workshop themes:</strong><br><ol><li>What is or could be done to make AI-generated code more trustworthy, from the perspective of functionality and/or cybersecurity?</li><li>How can we do better at instilling the ideas and tools of secure development into the software profession?</li><li>Being able to produce quality code, with or without the aid of AI, seems to be related to system skills in general. How can we do better at giving students these skills before (or as) they enter the workplace?</li></ol>Position papers should be limited to three pages using this <a href="https://docs.google.com/document/d/11nr-Zy2MPObMYihN2x_v2jS7EcUkOLXm/edit?usp=sharing&amp;ouid=117342243438066964240&amp;rtpof=true&amp;sd=true" rel="nofollow external" class="bo"><strong>template</strong></a> and submitted by email to <a href="mailto:codebot25@umbc.edu" rel="nofollow external" class="bo"><strong>codebot25@umbc.edu</strong></a>. </div><div><br></div><div>The organizing committee will select several papers for live presentation at the workshop. Selection will be based on relevance to the workshop themes, technical merit, and perceived interest to the audience.  Position papers that are mere marketing pieces will not be considered, but descriptions of hardware and software solutions tying into the themes described above are welcome. Limited travel support may be available for non-local speakers. Position papers and summaries of the discussions that follow will make up the core of the workshop report.</div><div><br>UMBC students, both graduate and undergraduate, are welcome to submit position papers that describe their own personal experience and observations with AI-generated code in their own words.  
Students may include their resumes with position papers if they wish to have their work/resume circulated to other attendees.  Domestic and international students are welcome to participate in this workshop.<br><br><strong>Important Dates:</strong><br>  Position paper submission deadline: <strong>January 20, 2025</strong></div><div>  P̶o̶s̶i̶t̶i̶o̶n̶ p̶a̶p̶e̶r̶ s̶u̶b̶m̶i̶s̶s̶i̶o̶n̶ d̶e̶a̶d̶l̶i̶n̶e̶:̶ J̶a̶n̶u̶a̶r̶y̶ 7̶, 2̶0̶2̶5̶<strong><br></strong>  Notice of acceptance: January 31, 2025<br>  Registration deadline: February 18, 2025<br>    (no registration fee, but space is limited)<br>  Workshop dates: February 25-26, 2025<br><br>The workshop will take place at <strong><a href="https://www.umbctraining.com/" rel="nofollow external" class="bo">UMBC Training Centers</a></strong>, 6996 Columbia Gateway Dr #100, Columbia, MD 21046</div><div><br></div><div><strong>REGISTER </strong>@ <a href="https://forms.gle/CipmPbbBVBLfHc728" rel="nofollow external" class="bo"><strong>https://forms.gle/CipmPbbBVBLfHc728</strong></a><br><br><strong>In-person space is limited, so register early! Based on RSVPs received, the organizing committee reserves the right to be selective in whom it invites to join the in-person meeting.</strong></div><div><br>Instructions for virtual participation will be made available prior to the workshop.<br><br><strong>Organizing Committee:</strong><br>  Prajna Bhandary, UMBC<br>  Mike De Lucia, Army Research Laboratory<br>  Richard Forno, UMBC<br>  Lindsay Gaughan, UMBC Training Centers<br>  Cynthia Matuszek, UMBC<br>  Charles Nicholas, UMBC<br>  Steve Simske, Colorado State University<br>  Larry Wagoner, Dept. of Defense<br>  Linda Kidder Yarlott, UMBC<br>  Paul Yu, Army Research Laboratory<br><br></div><div>Questions? Send email to <a href="mailto:codebot25@umbc.edu" rel="nofollow external" class="bo"><strong>codebot25@umbc.edu</strong></a></div></div>
]]>
  </Body>
  <Summary>Can We Trust AI-Generated Code?   Workshop sponsored by UMBC &amp; Army Research Laboratory  Feb. 25-26, 2025 UMBC Training Centers, Columbia, MD &amp; online      Submit Position papers...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/146149/guest@my.umbc.edu/b7b522465d0ebaf0fef24fba8865b2c5/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>code</Tag>
  <Tag>genai</Tag>
  <Tag>llm</Tag>
  <Tag>programming</Tag>
  <Tag>trust</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>UMBC and Army Research Laboratory</Sponsor>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Mon, 09 Dec 2024 07:57:55 -0500</PostedAt>
  <EditAt>Tue, 07 Jan 2025 15:13:56 -0500</EditAt>
</NewsItem>
  <NewsItem contentIssues="true" id="145964" important="false" status="posted" url="https://dev.my.umbc.edu/groups/csee/posts/145964">
  <Title>Talk: Privacy-Preserving Data Sharing in Intrusion Detection Systems, 12/6 online</Title>
  <Tagline>12&#8211;1pm EST Friday, December 6, 2024, online</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><span><h5><span><strong>UMBC Cyber Defense Lab presents</strong></span><span> </span></h5><h4><span>Privacy-Preserving Data Sharing in Intrusion Detection Systems</span></h4><h5><span><strong>Zhiyuan Chen<br></strong></span><span><strong>Professor and Chair, UMBC Information Systems Department</strong></span></h5><h5><strong><span>12:00–1pm, Friday, December 6, 2024, </span><a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo"><span>online</span></a></strong></h5><div><br></div><p><span>Intrusion detection systems increasingly use machine learning methods, which require large volumes of data to be effective. Sharing such data sets will benefit the research community and industry. One obstacle to sharing such data is data privacy because network trace data or server log data often contains sensitive information, such as IP addresses. Even if IP addresses are encrypted, adversaries may still inject packets with unique patterns (e.g., with certain packet sizes) such that they can use these packets to infer encrypted information. Another challenge arises when multiple intrusion detection systems from multiple organizations need to correlate their detected alerts to identify a larger threat, but the information they exchange may contain sensitive information such as network topology and traffic. This talk covers two approaches to address this problem. First, we propose a data anonymization approach that de-identifies network trace data. Compared to existing approaches, this approach provides stronger privacy protection and is robust to injection attacks. Second, we propose two privacy-preserving distributed alert correlation methods, one using additive secret sharing and the other using differential privacy. We also investigate tradeoffs between these two methods.</span></p><p><a href="https://userpages.umbc.edu/~zhchen/" rel="nofollow external" class="bo"><span><strong>Dr. 
Zhiyuan Chen</strong></span></a><span> is a Professor in the Department of Information Systems at UMBC. He received a BS and an MS from Fudan University, China, and a PhD in Computer Science from Cornell University. His research covers the areas of data science, big data, privacy preserving data mining and data management, data exploration and navigation, semantic-based search and data integration using semantic networks, and adversarial learning and its applications in cybersecurity. He has published extensively in these areas and has received funding from NSF, the Department of Energy, IBM, the Office of Naval Research, MITRE, and the Department of Education.</span></p><p><span>Host: <a href="https://www.csee.umbc.edu/people/faculty/alan-t-sherman/" rel="nofollow external" class="bo">Alan T. Sherman</a>. Support for this event was provided in part by NSF under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays 12-1pm. All meetings are open to the public.</span></p><div><span><br></span></div></span></div>
]]>
  </Body>
  <Summary>UMBC Cyber Defense Lab presents   Privacy-Preserving Data Sharing in Intrusion Detection Systems  Zhiyuan Chen Professor and Chair, UMBC Information Systems Department  12:00–1pm, Friday, December...</Summary>
  <Website>https://cisa.umbc.edu/</Website>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/145964/guest@my.umbc.edu/94617ce0bf7f07ceb790e94bcd1d459b/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>cybersecurity</Tag>
  <Tag>privacy</Tag>
  <Group token="csee">Computer Science and Electrical Engineering</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/csee</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="original">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
  <AvatarUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
  <AvatarUrl size="medium">https://assets1-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
  <AvatarUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
  <Sponsor>UMBC Cyber Defense Lab</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/964/5e5ade707b12bec4b3c06ce7bc10de65/xxlarge.jpg?1732903199</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/964/5e5ade707b12bec4b3c06ce7bc10de65/xlarge.jpg?1732903199</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/964/5e5ade707b12bec4b3c06ce7bc10de65/large.jpg?1732903199</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/964/5e5ade707b12bec4b3c06ce7bc10de65/medium.jpg?1732903199</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/964/5e5ade707b12bec4b3c06ce7bc10de65/small.jpg?1732903199</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/964/5e5ade707b12bec4b3c06ce7bc10de65/xsmall.jpg?1732903199</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/145/964/5e5ade707b12bec4b3c06ce7bc10de65/xxsmall.jpg?1732903199</ThumbnailUrl>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Fri, 29 Nov 2024 13:43:37 -0500</PostedAt>
  <EditAt>Fri, 29 Nov 2024 15:50:06 -0500</EditAt>
</NewsItem>
</News>
