
Zhila Aghajari – is a Ph.D. student in the Department of Computer Science and Engineering at Lehigh University and a member of the Social Design Lab. Her current research broadly focuses on leveraging the impact of social norms to promote more prosocial behavior online. In one of her recent projects, she is exploring the different mechanisms by which social norms can influence the assessment of, and response to, hateful comments in online communities. This project aims to design interventions around social norms that promote more desirable and civil discussions online. In another project, she explores the influence of social norms on individuals’ responses to misleading views. This project aims to design interventions that underscore social norms to protect vulnerable people from engaging in the further spread of misleading views.
Interest Statement AI-mediated communication (AI-MC) is growing rapidly in everyday interactions. Examples range from AI involvement in text-based messaging, such as smart replies in chat applications, to richer communication media in which AI filters now allow individuals to change their appearance in live conversations. However, little is known about the effects of AI-mediated communication on interpersonal relationships, and various complexities make AI-MC research difficult to carry out. My participation in this workshop will bring attention to some of the methodological challenges in studying AI-MC, and I will discuss the need for innovative approaches to address them.

Hanan Aljasim – is a Ph.D. candidate in the Department of Computer Science and Engineering, within the School of Engineering and Computer Science (SECS), at Oakland University and a member of the Oakland HCI Lab. Her current research studies and designs for women’s safety in mobile social matching apps that connect users for rapid face-to-face encounters. She places particular focus on online-to-offline interaction, the risks of harm that arise in this process, and the potential for AI to maintain user safety.
Interest Statement I am interested in inspiring safety-conscious designs for the future of opportunistic social matching systems and other apps that scaffold online-to-offline interactions for young women. Opportunistic social matching systems aim to match people for immediate social interaction whenever an opportune context arises. Current apps like Tinder and Bumble are rapidly approaching this vision of near-immediate interaction for various interaction goals. Such apps also expose users to online-to-offline risks like sexual violence, and I believe AI can be better leveraged in these systems to support user safety and reduce the risk of harm. I believe MOSafely, including its community and platforms, is an ideal venue for this research direction.

Dominic DiFranzo – is an Assistant Professor of Computer Science and Engineering at Lehigh University. His research in human-computer interaction translates established social science theories into design interventions that encourage social media users to stand up to cyberbullies, fact-check fake news stories, and engage in other prosocial actions. In his effort to better implement and test these design interventions, he has also developed new experimental tools and methods that create ecologically valid social media simulations, giving researchers control over both the technical interface and the social situations found on social media platforms. His research has been published in numerous conferences and journals, including the ACM CHI Conference, the ACM CSCW Conference, the International World Wide Web Conference, and the ACM Web Science Conference.
Interest Statement I’m interested in exploring the methodological and practical challenges in designing and testing online community interventions. How do we better link observational studies, which explore what is actually happening online, with experimental studies, which allow us to find causal relationships? How do we use these findings to design and evaluate new tools, interventions, community governance structures, and more to help build safer spaces online? How do we as researchers collaborate with platforms and the communities that form on them?

Belén C. Saldías Fuentes – is a Ph.D. student at the MIT Center for Constructive Communication, working at the intersection of natural language processing and community values. She is specifically interested in child-aware content and the long-term effects on children and various communities of interacting with and being influenced by online AI systems. More recently, she has been interested in rationalizing content moderation and exploring interpretability techniques for such methods. Belén holds a master’s degree in Computer Science from P. Universidad Católica de Chile, for which she pursued her research thesis at the Institute for Applied Computational Science at Harvard. Belén has also worked in industry, including two years at Falabella.com, where she led and implemented the company’s first in-house machine learning systems. Most recently, she worked as a research intern at Google Research, focusing on evaluating language translation models. Born and raised in the central-south regions of Chile, she currently resides in Boston.
Interest Statement I see this workshop as a crucial opportunity to learn what others understand by “Mak[ing] The Internet A Safer Place For Youth,” to learn about their approaches, and to grow my network of people working towards this mission. Last year I held a workshop breakout session where we discussed Towards Child-Aware Machine Learning with a Focus on NLP Challenges and Applications (https://github.com/bcsaldias/icml-wiml-child-aware-ml). This year, inspired by Professor Veronica Barassi’s most recent book, “Child | Data | Citizen: How Tech Companies are Profiling Us from before Birth,” I have increasingly brought ethical considerations and opportunities into my technical work in natural language processing. My main child-computer interaction field deployment is INSPIRE (http://inspire.media.mit.edu), a mentorship-like program that connects middle-school students and mentors through a dialog agent, exposing students to role models and helping enhance their metacognitive traits.

Gionnieve Lim – is a Ph.D. student at the Singapore University of Technology and Design. Her research focuses on misinformation and the use of interface interventions such as labels, warnings, and visualisations to address the issue. Her recent work investigates user interactions with explainable artificial intelligence interfaces in the context of fact-checking, seeking to understand how various explanations are perceived by users and what considerations should be made to improve their design.
Interest Statement As AI becomes more pervasive on the Internet, particularly in social media, numerous ethical problems have surfaced due to the callous application of the technology. As such, there has been a turn towards human-centered AI (HCAI), which seeks to incorporate human values and sensibilities in the design of AI technologies to foster more considerate and sustainable solutions. In the workshop, it will be interesting to speak with practitioners who offer their unique angles on HCAI and to participate in activities that can bring these people and perspectives together.

Sang Won Lee – is an assistant professor in the Department of Computer Science at Virginia Tech. His research aims to create social computing systems that facilitate empathy among users and collaborators. His research vision of computer-mediated empathy comes from his background in computer music, striving to bring the expressive, collaborative, and empathic nature of music to computational systems. On one end, from an empathizer’s perspective, he creates interactive systems that facilitate understanding by providing ways in computer-mediated communication to share targets’ perspectives and transfer their context. On the other end, from the target’s perspective, he develops technologies that help users better express their intentions and emotions and ground communication. He has applied these approaches to various applications in creative domains, including music-making, design, writing, and programming. He has been an active author in top-tier human-computer interaction venues such as CHI, CSCW, UIST, and C&C.
Interest Statement To create a safe online environment for adolescents, one common approach is to use intelligent algorithms that detect online risk behaviors (e.g., cybergrooming) on social media and online platforms. However, users often have to share private information (e.g., text messages) with an intelligent system. This tension between adolescents’ privacy and their security can impede the wide deployment of such algorithms. In the meantime, we can complement this approach by educating potential target users, namely adolescents. However, an authentic and effective prevention program may challenge them to engage in uncomfortable discourse or expose them to explicit content. Such challenges can raise ethical issues, as being involved in such a program may trigger traumatic experiences from their past. In this workshop, I am interested in the latter approach, hoping to learn about the challenges involved in designing, developing, and deploying socio-technical interventions for cybergrooming.

Jinkyung Park – I am a Ph.D. candidate in the School of Communication and Information at Rutgers University. My research focuses on finding ways to prevent and reduce aggressive online interactions such as cyberbullying and online incivility. For my dissertation project, I designed and implemented an online experiment to examine whether a positive background embedded in an online discussion forum would be effective in reducing uncivil conversations. My secondary area of study is fairness in machine learning algorithms. I study theoretical and empirical evidence to support fair decision-making by various machine learning algorithms, such as mental health detection, cyberbullying detection, and misinformation detection algorithms. In addition, I have worked on a research project on changes in privacy perceptions during the COVID-19 pandemic; in particular, I looked at how individuals balance privacy concerns against the collective good of slowing down the spread of COVID-19.
Interest Statement Attending the MOSafely workshop will be a great opportunity for me to engage in community building toward youth online safety. In particular, I am interested in defining and quantifying risks (e.g., cyberbullying) to establish valid ground truth for machine learning systems. I would like to address the question of how we can refine current definitions of cyberbullying (which are mostly based on the school bullying literature) and, consequently, advance cyberbullying detection technologies built upon a youth-centered definition of cyberbullying. Through the workshop, I would also like to have a conversation about the possibilities of (un)expected bias that could be embedded in or emerge from multi-modal online risk detection algorithms and ways to mitigate such bias. The MOSafely workshop will be an important enabler of my growth as a researcher and as a member of a research community working to make the Internet a safe place for youth.

Devansh Saxena – is a Ph.D. candidate in the Department of Computer Science at Marquette University and a member of the Social and Ethical Computing Research Lab. His research interests include investigating and developing algorithmic systems used in the public sector, especially the child-welfare system. His current research examines collaborative child-welfare practice, where decisions are mediated by policies, practice, and algorithms, and seeks to map out the power, politics, and economics of data infrastructures employed in child welfare. His work is driven by human-centered data science because it is imperative to understand the context in which (and how) data is collected about children and families, how this data is used to produce algorithmic outcomes, and the social impact of such algorithmic tools on caseworkers and communities.
Interest Statement Research in HCI has continued to focus on participatory and human-centered design methodologies for building technologies that support the needs of foster youth. However, the child-welfare system is a complicated socio-political domain with several different stakeholders (e.g., birth parents, foster parents, foster youth, caseworkers, legal parties, policymakers) with competing interests. It is necessary to engage in collaborative discussions involving academics, practitioners, and policymakers and to examine the ethical and legal challenges in designing pragmatic systems that offer utility. I bring to this workshop my experiences at a child-welfare agency where I conducted a two-year ethnography. Caseworkers often surveilled foster youth’s social media to figure out where they were and whom they were interacting with. Caseworkers shared their frustrations about foster youth’s social media use and how the safety measures put in place did not adequately account for social media platforms, where much of the unsafe and risky interaction was taking place.

Ben Stein – A former musician, software engineer, and moonlighting DJ, I tamed my eclectic career urges in 2019 and became a Ph.D. student in Information Science at the University of Pittsburgh. I’m advised by Dr. Rosta Farzan. My research focuses on how computing technologies can create, describe, and foster human relationships. Recently, I’ve been applying this interest to adolescent online safety: how it’s mediated by the relationship between parent and child, and how well, or how poorly, modern interventions serve the needs of parents and children. As a software engineer, I’ve worked for industry-leading companies in finance and in medicine, serving in multiple technical roles as a senior engineer. I’m also an active contributor to the open-source community, with packages published in multiple software ecosystems and contributions to open-source projects, including Microsoft’s Visual Studio Code. But if all that doesn’t work out, maybe I’ll give that DJ thing another try.
Interest Statement As a participant in the initial meeting of MOSafely, I’m excited to begin the conversation around how we can leverage the tools we have to better serve our youth as they take their first steps into an online world. They face real threats, including cyberbullying, exposure to disturbing content, and solicitation by strangers, with grave consequences. However, research has also demonstrated how valuable participation in a connected world can be: it provides exposure to diverse perspectives, promotes emotional resilience, and empowers youth to solve the unique problems that they face. In this matter, it is clearly better to light a candle than to curse the darkness. At the workshop, I look forward to discussing how we can leverage the power of computing techniques like AI to do just that. I hope to incorporate the insights that we share into my future work in intervention design and to offer ideas from my own work that might support the work of my collaborators as well.

Daricia Wilkinson – is a Ph.D. candidate at Clemson University in the Human-Centered Computing program. Her research lies at the intersection of Human-Computer Interaction and recommender systems. Her most recent work focuses on identifying design options that promote safe online interactions while applying interdisciplinary methods to investigate barriers to online safety, specifically for under-represented communities. During her time at Clemson University, she has collaborated with researchers from IBM Research, NortonLifeLock Research Labs (formerly Symantec), the Max Planck Institute, and others. She is a recipient of fellowships from both Facebook and Google. She previously received her bachelor’s degree in Information Systems and Technology from the University of the Virgin Islands and her master’s degree in Computer Science from Clemson University.
Interest Statement At its core, my research unites two elements important to online safety: the transparency of the underlying mechanisms in online systems and the development of support tools that are grounded in users’ needs. Specifically, I look at developing safety mechanisms on social media that are Artificially Intelligent and Inclusive by Design. While prior work has studied elements of safety threats in isolation, I adopt a wider lens to reveal the entangled nature of day-to-day experiences and to uncover nuances in safety intentions depending on the harm. In my work, I take steps towards (1) filling the gap in empirical understanding of users’ perceptions of threats and how those perceptions are associated with their intentions to engage with support mechanisms, (2) understanding the effectiveness of current approaches to justice, and (3) providing specific recommendations that guide the development of equitable and inclusive safety mechanisms.

Douglas Zytko – is an Assistant Professor in the Department of Computer Science and Engineering at Oakland University. He is also Director of the Oakland HCI Lab, through which he has advised over 20 graduate and undergraduate students from myriad disciplines, including human-computer interaction, psychology, communication, and educational leadership. The Oakland HCI Lab broadly studies and designs for online-to-offline safety. Examples include computer-mediated consent to sex and sexual violence, safety-conscious social matching system design, virtual reality storytelling as a solution to misinformation impacting our physical health, and human-AI interaction for the protection of marginalized user groups.
Interest Statement The unique combination of computer-mediated and face-to-face communication gives rise to significant risk of harm against youth and young adults and, with it, unique opportunities for AI intervention. I believe MOSafely, as an open-source community and platform, is optimal for exploring these risks and potential AI interventions because it can foster unified cross-sector and cross-discipline action. A context of online-to-offline risk that I focus on is mobile social matching apps such as Tinder, Bumble, and Grindr, which are simultaneously becoming a standard way for youth to meet new people and a significant sexual violence risk factor. I am interested in how AI can be incorporated into social matching apps for user safety rather than just user discovery, especially through participatory AI design methods that empower at-risk users to create AI in their vision of safety.