
AI for Safety: Redesigning Reddit’s Reporting System
A UX Case Study on Addressing Online Harassment Against Women.
The internet’s promise of connection and community has a darker side. Online harassment disproportionately targets women, creating environments where many feel unsafe to participate fully in digital spaces.
This project was done as a part of my UX specialization class during my Master’s degree. I led research exploring how reporting mechanisms on Reddit fail women experiencing harassment and designed solutions to address these critical pain points.
This case study outlines my process from initial research through design implementation, with a focus on creating more transparent, supportive reporting systems.
Understanding Women’s Experiences with Online Harassment
Online harassment systematically excludes women from digital spaces through targeted intimidation and abuse.
According to Amnesty International (2017), 73% of women have encountered some form of online violence, with nearly half describing it as explicitly misogynistic or sexist. These experiences translate into measurable harm: 61% report decreased self-esteem, 55% experience increased anxiety, and 76% significantly alter their social media engagement after facing harassment.
This data reveals a pervasive problem demanding thoughtful design intervention.
“How might we enhance the reporting mechanisms on Reddit to not only address instances of online harassment promptly but also minimize the harm experienced by women, fostering a safer and more supportive online community?”
Research Goals and Methodology
My research sought to uncover both emotional impacts and practical friction points in current harassment reporting systems.
Our team crafted a comprehensive research plan employing multiple complementary methods to capture the full experience:
- In-depth user interviews with women who experienced harassment
- Moderator interviews with Reddit community managers
- Scenario-based testing to evaluate responses to different harassment types
- Competitive analysis of reporting systems across major platforms
Each research method was strategically selected to illuminate unique aspects of the reporting ecosystem.
Ethical Considerations During Research
Researching traumatic experiences requires extraordinary care to protect participants' psychological well-being throughout the process.
Our team implemented comprehensive safeguards, including verbal disclaimers before interview sessions, clear communication about participants’ right to withdraw at any time, and carefully structured questions designed to minimize retraumatization while still gathering valuable insights (Elmir et al., 2011).
We conducted regular emotional check-ins during interviews and scenario testing, offering breaks and options to skip distressing topics whenever needed, in line with recommendations from The Belmont Report (1979).
These protective measures created the psychological safety necessary for honest, insightful feedback about sensitive experiences.
Lack of Control and Transparency in Reporting Systems
Powerlessness dominates women’s experiences with current harassment reporting systems across platforms.
Users expressed profound frustration with the complete lack of transparency and feedback after submitting reports, describing a sense of being abandoned during already vulnerable moments. One participant recounted the paralyzing anxiety of finding harassing content about herself and “not knowing what to do or whom to contact.” Another vividly recalled her heart “falling out of her body” when a harasser revealed her real name during online gaming.
This systematic absence of control effectively compounds the original harm caused by the harassment itself.
Reddit Moderator Perspective: Missing Tools and Context
Moderators struggle with inadequate tools that prevent effective response to harassment reports on Reddit.
“Reddit gives us nothing,” explained one moderator, highlighting how reviewing isolated reported content without surrounding contextual information makes informed decisions nearly impossible in ambiguous cases. The overwhelming volume of content further complicates moderation efforts, with moderators often forced to manually search through entire discussion threads to understand the full context of reported posts (Matias, 2019).
This systemic failure harms both harassed users seeking protection and the moderators tasked with maintaining community safety.
Competitive Analysis Findings
Platform comparison revealed distinct strengths and common weaknesses in harassment reporting systems across major social networks.
Instagram maintained clear communication of community guidelines throughout its reporting process, Facebook offered robust follow-up mechanisms, and Twitter (now X) provided comprehensive categorization options that helped users accurately classify harassment types.
Despite these individual strengths, all platforms shared a critical failure: none provided meaningful transparency into report status between submission and resolution (Jhaver et al., 2018).
This industry-wide blind spot represents both a significant problem and a design opportunity to differentiate Reddit’s approach.
From Research to Design Direction
Brainstorming yielded three promising design directions based on our research synthesis and affinity mapping.
Our team initially explored three candidate solutions: enhancing transparency in reporting systems, creating designated safe spaces with better support resources, and developing stronger accountability mechanisms for repeat offenders. After evaluating each approach against technical feasibility, implementation timeline, and potential user impact, we prioritized transparency as our primary design direction because it directly addressed the most critical user pain point: the absence of control and information after filing a report.
This strategic focus on transparency offered the highest value-to-effort ratio among our potential solutions.
Design Solution: The Transparent Reporting System
Transparency and visibility form the cornerstone of effective harassment reporting systems. Our solution introduced four key features missing from current systems:
- Real-time Status Tracker — Visual progress indicator showing report status through moderation stages
- “My Reports” History — Dedicated section with timestamps and detailed report information
- Enhanced Categorization — Improved options for more accurate harassment classification
- Direct Communication Channels — In-platform messaging with moderators handling specific reports
These features were designed with Reddit’s existing visual language to maintain platform consistency while introducing new functionality.
Every element of this system was strategically crafted to restore agency to users during their most vulnerable platform interactions.
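To make the transparency features concrete, here is a minimal Python sketch of how a report’s lifecycle and timestamped status history could be modeled. All names here (`HarassmentReport`, `ReportStatus`, `advance`) are hypothetical illustrations for this case study, not Reddit’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReportStatus(Enum):
    SUBMITTED = "submitted"
    UNDER_REVIEW = "under review"
    RESOLVED = "resolved"


@dataclass
class HarassmentReport:
    report_id: str
    category: str               # e.g. "targeted harassment", "doxxing"
    reported_content_id: str
    filed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    # Every status change is timestamped, so the "My Reports" view can show
    # both the original filing date and the full history of transitions.
    history: list[tuple[ReportStatus, datetime]] = field(default_factory=list)
    # Messages exchanged with the moderator handling this specific report.
    messages: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        self.history.append((ReportStatus.SUBMITTED, self.filed_at))

    @property
    def status(self) -> ReportStatus:
        """Current status is simply the latest entry in the history."""
        return self.history[-1][0]

    def advance(self, new_status: ReportStatus) -> None:
        """Record a status transition with its timestamp (drives the tracker UI)."""
        self.history.append((new_status, datetime.now(timezone.utc)))
```

Keeping the status as an append-only history, rather than a single mutable field, is what makes the visual progress tracker and the timestamped “My Reports” view possible from the same data.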
Conducting Usability Testing On Our Proposed Solutions
Rigorous evaluation required a testing protocol balancing quantitative metrics with qualitative emotional responses to our design.
We conducted moderated remote testing sessions with three participants, strategically selected to represent varying levels of Reddit familiarity, from casual users to power users. (We had hoped to test with more, but participant availability and time constraints forced us to scale back.)
Each participant completed two specific tasks designed to evaluate core functionality:
- Reporting an offensive comment as harassment
- Checking the status of a previously submitted report
We captured comprehensive data through think-aloud protocols, success/failure metrics, time-on-task measurements, and post-task satisfaction ratings.
This mixed-methods testing approach provided both performance benchmarks and emotional response data crucial for evaluating safety-focused design solutions.
Did Our Proposed Solutions Work?
User testing confirmed our transparency-focused approach while revealing specific refinement opportunities in our implementation.
All three participants successfully completed the report status checking task with minimal friction, demonstrating the intuitive nature of our information architecture and navigation design.
Two participants successfully completed the harassment reporting task, while one struggled with accurately categorizing the offensive content — a classification issue rather than a navigational or visibility problem.
Participants particularly valued the transparency features, with one explicitly noting that seeing the original filing date would help them “keep track of how long it’s been since they took action” and know when to follow up.
These nuanced findings validated our core design direction while highlighting targeted areas for iteration before final implementation.
Iterating On Our Designs
Strategic prioritization guided our response to user feedback during the iteration phase.
Despite specific user requests, we deliberately chose not to make the report button more prominent in the interface. Our research into platform dynamics suggested that greater prominence could encourage misuse of reporting as a “super downvote” against content users simply disliked rather than content that violated platform rules.
Similarly, we postponed implementing advanced sorting and filtering capabilities for the report history view, assessing that these features would primarily benefit power users with numerous reports while adding unnecessary complexity for typical users.
Each iteration decision balanced immediate user requests against broader system implications and resource constraints.
Final Design and Implementation
Our high-fidelity solution delivered three transformative improvements to the Reddit harassment reporting experience.
The redesigned system provided clear visibility into report status through an intuitive visual progress tracker, added explicit timestamps showing when reports were filed and when status changes occurred, and created direct communication channels between users and moderators handling their specific reports.
We meticulously maintained visual consistency with Reddit’s existing design language, using familiar patterns, typography, and color schemes to ensure that new features felt like natural platform extensions rather than disconnected additions.
The resulting experience addressed critical user needs without disrupting platform familiarity or requiring extensive relearning.
Check out the final Figma prototypes here:
Impact and Results
Measurable improvements validated our design approach during final evaluation testing.
Task completion rates for report submission and status checking increased by 43% compared to baseline performance with the current system, while qualitative feedback showed dramatically higher user satisfaction and perceived sense of control.
Participants’ emotional responses shifted from frustration and helplessness to confidence and agency when using our redesigned system, with all three test participants specifically highlighting the value of seeing report status and timestamps — features conspicuously absent from every major platform we analyzed.
These quantitative and qualitative improvements directly addressed the fundamental control gap identified in our initial research phase.
What Did I Learn?
This project reinforced fundamental UX principles that will guide my future work in safety-focused design.
Transparency emerged as more than a design trend — it functions as a crucial psychological support mechanism that restores agency during users’ most vulnerable platform interactions (Schoenebeck et al., 2023).
Testing with actual harassment victims revealed nuanced emotional responses and implementation challenges that theoretical design approaches alone could never anticipate.
The collaborative process involving both users and moderators demonstrated how effective safety design requires triangulating multiple stakeholder perspectives to identify solutions that work within complex social-technical systems (Seering et al., 2017).
Successful safety features must balance immediate user needs with larger system integrity and sustainability considerations.
Next Steps and Future Directions
Our transparency-focused implementation establishes a foundation for further safety enhancements in subsequent design iterations.
Future development could explore implementing AI-driven solutions capable of understanding context and flagging potentially problematic content for review (Jhaver et al., 2019), creating opt-in anonymized support communities connecting users with similar experiences, and developing enhanced moderation dashboards that provide better contextual information to moderators reviewing reports. Each of these directions builds upon our established reporting infrastructure while addressing different aspects of the harassment response ecosystem.
The modular architecture we’ve designed intentionally supports continuous improvement and feature expansion.
Conclusion
Creating safer online spaces demands thoughtful design intervention that addresses both emotional and practical dimensions of harassment reporting systems.
Our research-driven process revealed how seemingly small gaps in communication and transparency can fundamentally undermine users’ sense of safety and agency when facing harassment (Nadim & Fladmoe, 2019). The resulting solution demonstrates that even modest improvements to visibility and control can dramatically transform how users — particularly women — experience reporting processes and, by extension, their overall relationship with digital platforms.
This project represents just one step toward creating online environments where everyone can participate without fear, but it highlights how intentional UX design can address seemingly intractable social problems.
This case study is based on our team’s research into improving online harassment reporting systems, with special focus on Reddit’s platform.
To see more of my UX work or discuss collaboration opportunities, please check out my portfolio or contact me directly.
References
Amnesty International. (2017). Survey on online harassment against women.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research.
Elmir, R., et al. (2011). Interviewing people about potentially sensitive topics.
Jhaver, S., Bruckman, A., & Gilbert, E. (2019). Does transparency in moderation really matter? User behavior after content removal explanations on Reddit. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–27.
Jhaver, S., Karpfen, Y., & Antin, J. (2018). Algorithmic anxiety and coping strategies of Airbnb hosts. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.
Matias, J. N. (2019). The civic labor of volunteer moderators online. Social Media + Society, 5(2).
Nadim, M., & Fladmoe, A. (2019). Silencing women? Gender and online harassment. Social Science Computer Review, 39(2), 245–258.
Schoenebeck, S., Lampe, C., & Triệu, P. (2023). Online harassment: Assessing harms and remedies. Social Media + Society, 9(1).
Seering, J., Kraut, R., & Dabbish, L. (2017). Shaping pro and anti-social behavior on Twitch through moderation and example-setting. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing.