<?xml version="1.0"?>
<News hasArchived="false" page="1" pageCount="1" pageSize="10" timestamp="Sun, 19 Apr 2026 18:15:47 -0400" url="https://dev.my.umbc.edu/groups/cybersecurity/posts.xml?tag=reasoning">
  <NewsItem contentIssues="false" id="157134" important="false" status="posted" url="https://dev.my.umbc.edu/groups/cybersecurity/posts/157134">
  <Title>talk: Asymmetric Responsibility Framing to Deepen Adolescents' Adversarial Reasoning about Phishing</Title>
  <Tagline>12&#8211;1 pm Friday, March 6, 2026 via Webex</Tagline>
  <Body>
    <![CDATA[
    <div class="html-content"><p><strong>The <a href="https://cisa.umbc.edu/" rel="nofollow external" class="bo">UMBC Cyber Defense Lab</a> presents</strong></p><h4>Asymmetric Responsibility Framing to Deepen Adolescents' Adversarial Reasoning about Phishing</h4><h5><strong><a href="https://www.csee.umbc.edu/people/tenure-track-faculty/sanorita-dey/" rel="nofollow external" class="bo">Professor Sanorita Dey<br></a>UMBC CSEE Department</strong></h5><h5>12–1 pm Friday, March 6, 2026 via <a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo">Webex</a></h5><p>Adolescents regularly navigate digital environments where persuasive tactics, social engineering, and phishing attempts are embedded in everyday communication. While many can recognize obvious scams, they often struggle to explain why a message is manipulative, how tactics unfold over time, or what protective actions should follow. This gap reflects a limitation not only in knowledge, but also in adversarial reasoning: the ability to infer intent, anticipate harm, and respond strategically under uncertainty. This project investigates whether asymmetric responsibility framing can deepen adolescents' adversarial reasoning in phishing contexts. We test whether positioning participants as accountable for guiding a vulnerable peer, rather than having them reason independently, reshapes how they analyze and respond to emerging threats. Grounded in theories of accountability and cognitive engagement, we examine how responsibility structures influence the depth and organization of reasoning.</p><p>We developed a staged, dual-conversation simulation modeling gradual phishing escalation. Participants were assigned either to a solo condition, where they independently assessed a suspicious interaction, or to a responsibility condition, where they advised a "buddy" engaged in an unfolding exchange. This design isolates the effect of responsibility framing beyond content exposure.
We measured explanation depth, exploit identification accuracy, detection timing, and quality of protective recommendations while accounting for cognitive demand. Findings show that responsibility framing significantly improves explanation quality and protective guidance. These effects persist after controlling for effort and are strongest during gradual escalation, suggesting that accountability reshapes reasoning processes rather than simply increasing engagement. The talk will cover the theoretical framing, experimental design, and implications for AI-mediated cybersecurity education, along with open questions about scaffolding and generalizability to other digital risk domains.</p><p><a href="https://www.csee.umbc.edu/people/tenure-track-faculty/sanorita-dey/" rel="nofollow external" class="bo"><strong>Sanorita Dey</strong></a> is an assistant professor of computer science and electrical engineering at UMBC. Her research sits at the intersection of human-centered AI, STEM education, and ethical computing, with a focus on designing AI systems that meaningfully augment human learning processes. She develops AI-assisted learning environments that support critical thinking, adversarial reasoning, digital risk awareness, and reflective practice in STEM contexts. Her work emphasizes human-centric design principles, integrating empirical methods, experimental evaluation, and sociotechnical analysis to ensure AI tools are pedagogically grounded, ethically responsible, and developmentally appropriate. Across projects spanning cybersecurity education, AI-mediated mentorship, and responsible computing, Dr. Dey investigates how interaction design, accountability structures, and scaffolded dialogue can deepen learning outcomes while preserving learner agency. 
Her scholarship contributes to advancing equitable and reflective AI integration in K–12 and higher education STEM environments.</p><p>Host: <a href="https://cybersecurity.umbc.edu/alan-sherman/" rel="nofollow external" class="bo">Dr. Alan T. Sherman</a>, <a href="mailto:sherman@umbc.edu">sherman@umbc.edu</a>. Support for this event was provided in part by the NSF under SFS grants DGE-1753681 and 2438185.</p></div>
]]>
  </Body>
  <Summary>The UMBC Cyber Defense Lab presents  Asymmetric Responsibility Framing to Deepen Adolescents' Adversarial Reasoning about Phishing  Professor Sanorita Dey UMBC CSEE Department  12–1 pm Friday,...</Summary>
  <TrackingUrl>https://dev.my.umbc.edu/api/v0/pixel/news/157134/guest@my.umbc.edu/7d6a25922028dc348e524e3f6925f96d/api/pixel</TrackingUrl>
  <Tag>ai</Tag>
  <Tag>phishing</Tag>
  <Tag>reasoning</Tag>
  <Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
  <GroupUrl>https://dev.my.umbc.edu/groups/cybersecurity</GroupUrl>
  <AvatarUrl>https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
  <AvatarUrl size="original">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
  <AvatarUrl size="xxlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
  <AvatarUrl size="xlarge">https://assets4-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
  <AvatarUrl size="large">https://assets2-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
  <AvatarUrl size="medium">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
  <AvatarUrl size="small">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
  <AvatarUrl size="xsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
  <AvatarUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
  <Sponsor>UMBC Cybersecurity Institute Group</Sponsor>
  <ThumbnailUrl size="xxlarge">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/157/134/76be67177c12be779ab35ff4492f099a/xxlarge.jpg?1772633125</ThumbnailUrl>
  <ThumbnailUrl size="xlarge">https://assets1-dev.my.umbc.edu/system/shared/thumbnails/news/000/157/134/76be67177c12be779ab35ff4492f099a/xlarge.jpg?1772633125</ThumbnailUrl>
  <ThumbnailUrl size="large">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/157/134/76be67177c12be779ab35ff4492f099a/large.jpg?1772633125</ThumbnailUrl>
  <ThumbnailUrl size="medium">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/157/134/76be67177c12be779ab35ff4492f099a/medium.jpg?1772633125</ThumbnailUrl>
  <ThumbnailUrl size="small">https://assets2-dev.my.umbc.edu/system/shared/thumbnails/news/000/157/134/76be67177c12be779ab35ff4492f099a/small.jpg?1772633125</ThumbnailUrl>
  <ThumbnailUrl size="xsmall">https://assets4-dev.my.umbc.edu/system/shared/thumbnails/news/000/157/134/76be67177c12be779ab35ff4492f099a/xsmall.jpg?1772633125</ThumbnailUrl>
  <ThumbnailUrl size="xxsmall">https://assets3-dev.my.umbc.edu/system/shared/thumbnails/news/000/157/134/76be67177c12be779ab35ff4492f099a/xxsmall.jpg?1772633125</ThumbnailUrl>
  <ThumbnailAltText>Phishing image</ThumbnailAltText>
  <PawCount>0</PawCount>
  <CommentCount>0</CommentCount>
  <CommentsAllowed>true</CommentsAllowed>
  <PostedAt>Wed, 04 Mar 2026 09:11:01 -0500</PostedAt>
</NewsItem>
</News>
