Abstract: Conflicts frequently arise in online discussions, with consequences ranging from mild to severe for both individuals and the online community. Reddit, the self-proclaimed front page of the internet, has over 330 million active users worldwide who participate in a wide array of discussions. Conversations among users with competing views often turn hostile, and Reddit moderators must intervene by banning users or censoring posts. While simple and effective, these approaches address the problem only after the damage is done. This project aims to detect and mediate online conflicts between Reddit users early by developing a data-driven application. The project comprises three components: detection, intervention, and validation. We used natural language processing (NLP) to detect nuances in language that are indicative of impending conflict, analyzing the linguistic cues in users' previous posts. We then labeled the comments in a discussion with running averages and mapped the trajectory of the discussion. When an impending conflict was detected, we facilitated intervention through a politeness language generation model that recommends less offensive language for users to adopt during the discussion.
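The running-average labeling described above can be sketched as follows. This is a minimal illustration only: the per-comment hostility scores and the 0.5 threshold are hypothetical assumptions, not values from the project itself.

```python
# Minimal sketch of labeling a discussion with running averages and
# flagging an impending conflict. Scores and threshold are illustrative.

def running_average_trajectory(scores):
    """Label each comment with the running average of hostility scores so far."""
    trajectory = []
    total = 0.0
    for i, score in enumerate(scores, start=1):
        total += score
        trajectory.append(total / i)
    return trajectory

def flag_impending_conflict(trajectory, threshold=0.5):
    """Return the index of the first comment at which the running average
    crosses the threshold, or None if the discussion stays civil."""
    for i, avg in enumerate(trajectory):
        if avg >= threshold:
            return i
    return None

# Hypothetical hostility scores (0 = civil, 1 = hostile) for five comments.
scores = [0.1, 0.3, 0.8, 0.9, 0.7]
traj = running_average_trajectory(scores)
print(traj)                           # trajectory of the discussion
print(flag_impending_conflict(traj))  # index where intervention would trigger
```

In practice the scores would come from the NLP conflict-detection model, and crossing the threshold would trigger the politeness-generation intervention.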