
Workshop: Sunday, October 24th, 10:00 AM – 12:05 PM EDT.



Overview

The goal of this one-day workshop is to begin building an active community of researchers, practitioners, and policy-makers who are jointly committed to leveraging human-centered artificial intelligence (HCAI) to make the internet a safer place for youth. This community will be founded on the principles of open innovation and human dignity to address some of the most salient safety issues of the modern-day internet, including online harassment, sexual solicitation, and the mental health of vulnerable internet users, particularly adolescents and young adults. We will partner with the Mozilla Foundation to launch a new open project named “MOSafely.org,” which will serve as a platform for code, research, and data contributions that support the mission of internet safety. During the workshop, we will discuss: 1) the types of contributions and technical standards needed to advance the state-of-the-art in online risk detection, 2) the practical, legal, and ethical challenges that we will face, and 3) ways in which we can overcome these challenges through the use of HCAI to create a sustainable community. An end goal of creating the MOSafely community is to offer evidence-based, customizable, robust, and low-cost solutions that are accessible to the public for the purpose of youth protection.

Important Dates

  • Participant submissions due: September 20, 2021 (Extended until September 27, 2021)
  • Participants notified of acceptance: September 27, 2021
  • Camera-ready due: October 10, 2021
  • Virtual workshop: October 24, 2021, 10:00 AM – 12:05 PM EDT

How to Participate?

Prospective participants are asked to submit a brief statement of interest to ensure that their participation is well-aligned with the workshop goals. Submissions can be structured in multiple ways:

  1. A short bio of each attendee with a statement of motivation/interest for attending the workshop.
  2. An academic position paper (2-4 pages) in the SIGCHI extended abstract format discussing one or more of the workshop themes.
  3. A case study on relevant work that demonstrates a contribution towards HCAI/AI for youth online safety/risk detection.

We also encourage potential attendees to explicitly state their commitment to joining MOSafely as meaningful contributors who can help build and sustain the open-source community. We encourage submissions that are honest and subversive.

How to Submit?

Workshop papers should be submitted to: mosafelyucf@gmail.com

Submissions will be peer-reviewed by the workshop’s program committee. Acceptance will be based on the quality of the position paper, relevance and engagement with the workshop themes, and the participant’s potential to meaningfully contribute to the workshop discussions and goals.

Notifications of acceptance will be sent out by September 27, 2021. Camera-ready versions will be due October 10, 2021, and will be made available on the workshop website.

NOTE: At least one author of each accepted position paper must attend the workshop. All participants must register for both the workshop and for at least one day of the conference.

Introduction

This workshop seeks to leverage human-centered principles and innovative machine learning and artificial intelligence techniques to keep youth safe online. The general approach of using machine learning to detect online risk behaviors is not new. Yet, the bulk of the innovation in this space stays locked within academic research papers or behind corporate walls. We intend to unlock this potential. We will do this by bringing together a multidisciplinary and multi-organizational group of researchers, industry professionals, clinicians, and civil servants to research, build, evaluate, and bring to market state-of-the-art technologies that detect risk behaviors of youth online and/or their unsafe online interactions with others (e.g., cyberbullying, sexual solicitations and grooming, exposure to explicit content, non-suicidal self-injury, suicidal ideation, and other imminent risks). Our intention is to maximize societal impact by centralizing our open-source contributions and making them widely available to the public to address youth online safety directly within the platforms where online risks are most likely to occur. As such, our open-source community-building initiative, Modus Operandi Safely (“MOSafely”), will serve multiple end users that include, but are not limited to, social media platforms, youth safety coalitions, and other internet-based intermediaries (e.g., Apple iOS and Android smart devices, multi-player gaming platforms, internet service providers) who desire to proactively protect youth from serious online risks. During our one-day workshop, attendees will work together to address the following high-level themes: 1) the types of contributions and technical standards needed to advance the state-of-the-art in online risk detection, 2) the practical, legal, and ethical challenges that we will face, and 3) ways in which we can overcome these challenges through the use of human-centered artificial intelligence (HCAI) for online risk detection to create a sustainable community that has the potential to change the world. By addressing these themes together as a community, the goal of this workshop is to start actively building a vibrant ecosystem of contributors who shape and sustain MOSafely’s mission of leveraging open innovation and HCAI for the purpose of protecting youth online.

Background

Depressive symptoms, non-suicidal self-injury, and suicide rates have increased significantly among youth (ages 13-17) and young adults (ages 18-23) in the last decade, with recent research suggesting that the rise in social media use is a contributing factor to this negative trend. Indeed, the majority of the risks youth are exposed to online occur via social media sites, and the combination of social media and personal digital devices has created a problematic situation by providing unmediated internet access and a potentially dangerous level of “practical obscurity,” or limited visibility, into the risky activities teens engage in online. The burden of protecting teens from online risks has traditionally fallen on the shoulders of parents; however, by launching MOSafely, we aim to create a significant societal pivot, where youth online safety becomes a shared responsibility for all, especially the online platforms on which teens encounter risks, thereby serving teens, their families, and society as a whole.

Most commercially available risk detection solutions focus on detecting objectionable or illegal content, as opposed to detecting online risks for the explicit purpose of safeguarding youth.

However, Facebook and other social media platforms have been at the forefront of developing proprietary algorithms to detect risk behaviors, such as suicidal intentions and cyberbullying. While considered the state-of-the-art, these risk detection algorithms are platform-specific, opaque as to how they work, not publicly available for reuse, and typically focused on a narrow range of the most serious risk behaviors, ignoring important patterns of risk escalation that could help circumvent severe harm before it occurs. In addition, these approaches do not fully incorporate the perspectives of diverse stakeholders, especially those of victims, resulting in missed implicit references as well as atypical framings of online risk. Most current risk detection algorithms also suffer from a high number of false positives, making them fairly unusable in real-world settings. False positive rates are high because these algorithms often do not take human-centered approaches that leverage insights gained from contextual information (e.g., metadata, such as the time of day a message was sent, or who sent or received the message) and patterns of behavior over time (e.g., sexual grooming patterns). These two limitations (i.e., the opaqueness and limited availability of proprietary algorithms, combined with the lack of human-centeredness in the development process) create a compelling case for leveraging open innovation and HCAI to build better solutions for the protection of youth online. The inner workings of proprietary algorithms are hidden from the general public, which restricts us from understanding how they work and at times prevents us from using or building upon their functionality. Open innovation has at its core the concept of using both internal and external ideas to create solutions; we therefore believe it can help overcome these proprietary blocks as we work with the community to create open-source algorithmic solutions. Further, the integration of HCAI into this problem space is critical. HCAI looks at AI and ML algorithms through the lens of humans, advocating that such systems need to incorporate a socio-cultural understanding of humans as well as help humans understand AI.
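
To make the role of contextual information concrete, the minimal sketch below (Python with scikit-learn; the data, field names, and labels are invented purely for illustration, not drawn from any existing MOSafely system) shows how metadata such as the hour a message was sent and the sender’s relationship to the recipient might be combined with text features in a single classifier, rather than classifying message text in isolation:

```python
# Minimal sketch (hypothetical data/fields): augmenting message text with
# contextual metadata, a human-centered alternative to text-only models.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical labeled conversations: text plus contextual signals.
df = pd.DataFrame({
    "text": ["hey, what school do you go to?", "see you at practice!"],
    "hour_sent": [2, 16],                      # time of day (0-23)
    "sender_relation": ["stranger", "friend"], # who sent the message
    "label": [1, 0],                           # 1 = risky, 0 = safe
})

preprocess = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("relation", OneHotEncoder(handle_unknown="ignore"), ["sender_relation"]),
    ("time", "passthrough", ["hour_sent"]),
])

model = Pipeline([("features", preprocess), ("clf", LogisticRegression())])
model.fit(df[["text", "hour_sent", "sender_relation"]], df["label"])
```

Therefore, we present the following goals and themes for our workshop.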


Workshop Goals

This workshop will serve as the inaugural launch of MOSafely.org, an open-source community that leverages evidence-based research, data, and HCAI to help youth engage more safely online. The name “Modus Operandi Safely” (i.e., MOSafely) stems from our desire to help youth engage “more safely” online. As an open-source initiative, we have partnered with Mozilla to learn from their extensive experience creating open-source solutions. From this partnership, we have learned the importance of being supported by a diverse, committed team, and we have solidified our desire to work with a community to provide an online risk-detection platform. A strong community-based commitment is key to the success of MOSafely. Thus, the primary goal of the workshop will be community building. Toward this end, we will bring together a diverse group of researchers, industry professionals, youth service providers, and policy makers who have demonstrated a commitment to the mission of youth online safety and well-being, open innovation, and/or HCAI for youth risk detection in online contexts. We will build upon previous CSCW and CHI workshops that addressed related themes. Attendees will help us identify key stakeholders, best practices, challenges, and solutions for establishing the MOSafely community as an open-source leader in the HCAI community for youth risk detection and online safety by addressing the following workshop themes.

Workshop Themes

Following the rapid growth of social media, youth are increasingly exposed to harmful content and interactions online, ranging from pornography to offensive messages in online communities. Past literature on online risk detection algorithms has adapted approaches from machine learning and natural language processing. Most approaches for sexual risk detection are currently based on traditional ML algorithms, while more recently researchers have utilized deep learning models. Cyberbullying detection studies have also implemented supervised learning techniques; however, obtaining large amounts of data and ensuring the transparency of these models are recurring challenges of such approaches. Currently, due to the challenges of collecting sensitive data from youth for the purpose of online risk detection, most researchers rely on unrealistic or overly general data for risk detection. Therefore, it is important to establish ecologically valid training datasets of teens’ digital trace data and to define and quantify risks so that machine learning systems have informed ground truth. We call on the community to discuss the technical standards needed to advance the state-of-the-art in online risk detection. This includes, but is not limited to, techniques that could further improve existing online risk detection systems specifically geared towards youth, as well as ways to stimulate participation within the community. We raise the following questions (an illustrative sketch follows the questions):

  1. How can we devise more sophisticated detection approaches that detect multi-modal online risks through textual, visual, and metadata signals?
  2. What technical standards are needed for the centralized development of online risk detection algorithms for youth?
  3. What types of contributions (e.g., code libraries, evidence-based research, data sets, etc.) are needed to advance the state-of-the-art in the algorithmic detection of youth online risks?
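
As one possible starting point for question 1 above, the following sketch (PyTorch; every dimension, encoder, and input here is a hypothetical placeholder rather than an existing MOSafely model) illustrates a simple late-fusion architecture that combines textual, visual, and metadata signals into a single risk score:

```python
# Minimal late-fusion sketch (all dimensions hypothetical): separate
# projections for text, image, and metadata are fused for risk scoring.
import torch
import torch.nn as nn

class MultiModalRiskModel(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, meta_dim=8, hidden=128):
        super().__init__()
        # In practice, text/image embeddings would come from pretrained
        # encoders (e.g., a language model and a vision model).
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.meta_proj = nn.Linear(meta_dim, hidden)
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(3 * hidden, 1))

    def forward(self, text_emb, image_emb, meta):
        # Concatenate the per-modality representations, then score.
        fused = torch.cat([
            self.text_proj(text_emb),
            self.image_proj(image_emb),
            self.meta_proj(meta),
        ], dim=-1)
        return torch.sigmoid(self.classifier(fused))  # risk probability

# Hypothetical usage with random embeddings for a single message.
model = MultiModalRiskModel()
risk = model(torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 8))
```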

Developing machine learning models for online risk detection entails practical, legal, and ethical challenges that need to be taken into account. Ensuring the protection of the vulnerable populations we are trying to serve is mission-critical to our approach. Algorithmic research can often fall short if it does not consider the ethical implications of scraping, analyzing, and making classifications based on users’ social media data. When using such data specifically related to youth who are minors, there are numerous aspects that need to be considered, such as consent, assent, and reporting incidents of child abuse and/or pornography. In the past decade, social media has amassed a great deal of data from youth, but the accessibility of this data has been limited; recently, there has been movement towards making it available. In this theme, we want to explore the practical, legal, and ethical challenges we will face using AI in online risk detection for the explicit purpose of protecting youth.

  1. What are the legal and ethical implications of collecting the digital trace data of youth?
  2. How can the community be mobilized to work on detecting risks targeted towards youth online without exposing their data to the entire community? What infrastructure must be in place to safely collect and use teen data for risk detection?
  3. How can bias be avoided in youth online risk detection algorithms?
  4. What are the potential unintended consequences of developing and making widely available algorithms that detect youth risk behavior online?

It is easy for computer scientists to focus on functionality and performance while developing algorithms. However, the HCI research community has identified that focusing on such metrics without incorporating the human context is unwise. As such, the need for a human-centered approach to algorithm design has been highlighted in recent literature. There have been issues particularly related to bias, stereotyping, and marginalization in systems using these technologies. Thus, it is important to integrate human-centeredness into the development of MOSafely solutions to ensure transparency, explainability, and accountability. Embedding human-centeredness in the development of an online youth safety system will help ensure the robustness of the open-source tool and also expand the potential uses of MOSafely in protecting the safety of youth online. In this theme, we want to explore the need for a human-centered lens during the development of risk detection algorithms for youth. Relevant contributions include, but are not limited to, utilizing HCML and HCI methods across the different stages of developing AI systems for risk detection and online safety: dataset creation and design, developing and evaluating the systems, creating ethical systems that best protect youth’s privacy, removing various types of bias from systems, and making technical ML contributions. For this theme, we pose the following questions (an illustrative sketch follows the questions):

  1. How do we incorporate different stakeholders’ perspectives and needs in the outputs of MOSafely?
  2. How do the current algorithm design techniques fall short in being user and stakeholder centered?
  3. What would be the key aspects of human-centeredness in machine learning that we should consider when trying to overcome these limitations?
  4. How can the incorporation of a human-centered viewpoint during algorithm design and development become a focal point in our community moving forward?
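
As a small illustration of the transparency and explainability concerns raised in this theme, the sketch below (Python with scikit-learn; the toy data and model are invented for illustration) shows how even a simple linear risk classifier can surface the terms that drove a prediction, the kind of human-readable rationale that could be shown to moderators, parents, or youth instead of an opaque score alone:

```python
# Minimal explainability sketch (hypothetical model/data): expose the terms
# that most influence a linear risk classifier, rather than only a raw score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["send me a photo", "good luck on your exam", "what school do you go to"]
labels = [1, 0, 1]  # 1 = risky, 0 = safe (toy ground truth)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Rank vocabulary terms by their learned weight toward the "risky" class.
terms = np.array(vectorizer.get_feature_names_out())
order = np.argsort(clf.coef_[0])[::-1]
for term, weight in zip(terms[order][:5], clf.coef_[0][order][:5]):
    print(f"{term}: {weight:+.3f}")  # human-readable rationale for a flag
```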

Workshop Activities

The workshop will be held over two (2) hours on October 24th, 2021. The structure will lend itself towards discussion of how individual, siloed risk detection efforts can come together as a strong community to create solutions that keep teens safe online. Participants are encouraged to identify areas where teens are exposed to risky content online and to discuss how we as a community can mitigate these issues. We plan to accommodate up to 20 participants.

  1. Welcome/Introductions (15 minutes): The organizers will introduce themselves and the mission behind creating the MOSafely open-source community. They will briefly cover logistics, including the workshop schedule and high-level goals of the workshop.
  2. Lightning Talks (30 minutes): Attendees will introduce themselves and briefly present their position or work relevant to the workshop. Lightning talks should loosely align with Theme 1 of the workshop on the types of contributions necessary for advancing the state-of-the-art in HCAI for promoting online safety and risk mitigation of youth.
  3. Keynote Speaker (15 minutes): To inspire participants and spark discussion, we will have a keynote from Temi Popo, who leads developer-focused strategies around Trustworthy AI at Mozilla and is well-versed in the paradigm of Open Leadership. There will be an opportunity for workshop participants to engage with Ms. Popo in a Q&A session after her keynote.
  4. Break (10 minutes).
  5. Large Group Discussion (15 minutes): Workshop participants will brainstorm and identify potential challenges that will need to be addressed for creating a sustainable and vibrant community of scholars, practitioners, civil servants, and policy-makers committed to leveraging evidence-based research and advances in HCAI to translate open innovation into real-world practice for protecting youth online. This discussion aligns with Theme 2 of the workshop.
  6. Break-out Activities (15 minutes): Participants will break out into smaller groups to create actionable solutions and tangible project plans for tackling these challenges (Theme 3). Potential breakout groups may include: 1) specific HCAI approaches participants have developed for specific online risk contexts (e.g., harassment, abuse, sexual solicitations, mental health risks, etc.), 2) important HCAI concerns around trustworthiness, explainability, transparency, and accountability, and 3) broader concerns around the ethical and legal implications of developing such technologies and making them accessible to the general public. Breakout groups will be formed based on emerging themes in the submissions of accepted attendees and on discussions raised during the first half of the workshop.
  7. Reporting Outcomes (15 minutes): Each small group will report their ideas back to the larger group. Participants may use several approaches to communicate their ideas, such as developing project proposals, design fictions, mind-maps, or architecture diagrams. Each group will have an opportunity to get feedback from other workshop participants.
  8. Next Steps (10 minutes): The workshop will conclude with the organizers synthesizing the discussions and outcomes from the workshop and brainstorming with attendees on necessary next steps for officially launching MOSafely after the conclusion of the workshop.

Expected Workshop Contributions and Beyond

The expected outcome of the workshop will be a co-created agenda for officially establishing an inaugural community of MOSafely contributors who will play an active role in creating community standards, contributing code libraries and research, as well as taking on other leadership positions that support the community’s mission. After the workshop, the organizers will invite workshop attendees to join the MOSafely open-source community and will report the workshop outcomes in a blog post on the MOSafely.org website. Based on the preference of attendees, we will also create a listserv or forum for community-based organizing and ask workshop attendees to invite individuals from their extended networks to grow the MOSafely community. In terms of long-term outcomes, the MOSafely community will support two inter-related initiatives:

  • An open-source project that releases untrained algorithms relevant to youth online risk detection to the public as a way to gain market visibility and broad participation, so that others can train the algorithms with their own data sets and contribute code and expertise as part of this open-source project.
  • A commercial Software-as-a-Service (SaaS) Application Programming Interface (API) that combines these algorithms into an easy-to-use and accessible service for online risk detection and mitigation.
The open-source platform will provide typical community-building resources, including contribution guidelines, issue tracking, documentation, and development resources. The project will initially be maintained by the workshop organizers, with additional contributors gaining administrative roles as they contribute to the mission of MOSafely. Results from research generated by the community and code contributions from the open-source project will be used to continuously improve the SaaS API. Ultimately, the MOSafely community will provide these resources to developers and small to mid-sized internet-based companies that cater to youth. Developers may build product solutions by integrating open-source code libraries, and online platforms could leverage the MOSafely SaaS API to detect and mitigate online risks that are facilitated through their platforms. Our intention is that this approach will broaden participation and create a shared societal responsibility for keeping youth safe online.
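
For illustration only, a platform’s integration with the envisioned SaaS API might look something like the sketch below; the endpoint, authentication, request fields, and response shape are all speculative assumptions, since no such API exists yet:

```python
# Speculative sketch of a platform calling the envisioned MOSafely SaaS API.
# The endpoint, request fields, and response shape are illustrative only.
import requests

HYPOTHETICAL_ENDPOINT = "https://api.mosafely.example/v1/risk/score"

payload = {
    "content": "hey, you should send me a pic",
    "content_type": "text",
    # Contextual metadata the service might accept (hypothetical fields).
    "context": {"sender_relation": "stranger", "recipient_age_band": "13-15"},
}

resp = requests.post(
    HYPOTHETICAL_ENDPOINT,
    json=payload,
    headers={"Authorization": "Bearer <api-key>"},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()
# Hypothetical response: {"risk_type": "sexual_solicitation", "score": 0.87}
if result.get("score", 0) > 0.8:
    print("Escalate to the platform's trust & safety workflow:", result)
```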

Workshop Co-Organizers

The MOSafely workshop co-organizers are the PI/Co-PIs (and their students) on a National Science Foundation (NSF) Partnerships for Innovation (PFI) grant that funds this initiative.


Xavier Caddle – is a PhD student in the Department of Computer Science at the University of Central Florida (UCF) and a member of the Socio-Technical Interaction Research (STIR) Lab. His current research focuses on conducting customer discovery and developing open-source standards and best practices for making MOSafely a sustainable community that leads the efforts for HCAI internet safety for youth.


Afsaneh Razi – is a Ph.D. candidate in the Department of Computer Science at UCF and a member of the STIR Lab. Her dissertation research is aimed at improving adolescent online safety by utilizing human-centered insights and machine learning to detect unwanted sexual risk experiences of adolescents. Her recent work highlighted that online sexual experiences have become an irrevocable part of teens’ sexual development and identified the benefits and challenges when youth seek and receive support for these experiences. In her work, she discusses ethical challenges and considerations for data collection and the development/deployment of adolescent online risk detection AI systems.


Seunghyun Kim – is a Ph.D. student in the School of Interactive Computing at the Georgia Institute of Technology and a member of the Social Dynamics and Wellbeing Lab. His research focuses on developing human-centered machine learning algorithms to assess online risks (e.g., cyberbullying, harassment, abuse, and self-harm). His recent work highlighted the differences between the perspectives of cyberbullying stakeholders and their influence on cyberbullying detection algorithms.


Shiza Ali – is a Ph.D. student at Boston University in the ECE Department. She is a member of the Security Lab (SeclaBU). Her research involves analyzing large datasets to understand malicious users online and developing mitigation techniques. Her recent work involves developing tools to reduce cyberbullying and sexual harassment online, specifically when targeted towards teens.


Temi Popo – is an open innovation practitioner and creative technologist leading Mozilla’s developer-focused strategy around Trustworthy AI and MozFest. In 2012, she graduated magna cum laude from Mount Holyoke College, where she studied International Relations and Digital Media (computer science and film production). Ms. Popo also holds a Master’s in Digital Experience Innovation from the University of Waterloo, with professional certification in digital publishing from NYU. She has worked across several industries in the area of Innovation and Strategic Foresight.


Gianluca Stringhini – is an Assistant Professor in the ECE Department at Boston University and the Director of the SeclaBU Lab. He is a Senior Personnel for the grant supporting this effort. Dr. Stringhini works in the area of data-driven security, applying computational techniques to make online users safe. For example, he has recently worked on mitigating coordinated online harassment, cyberbullying, and disinformation.


Munmun De Choudhury – is an Associate Professor of Interactive Computing at Georgia Tech and the Director of the Social Dynamics and Well-Being Lab. She is a Co-Principal Investigator of the grant supporting this effort. Dr. De Choudhury is best known for her work in laying the foundation of computational and human-centered techniques to responsibly and ethically employ social media in understanding and improving mental health.


Pamela Wisniewski – is an Associate Professor in the Department of Computer Science at the University of Central Florida and Director of the STIR Lab. She is the Principal Investigator of the grant supporting this effort. Dr. Wisniewski’s research expertise lies at the intersection of social media, privacy, and online safety for adolescents (ages 13-17). She was one of the first researchers to recognize the need for resilience-based and teen-centric approaches to online safety, rather than abstinence-based approaches, and to back this stance up with empirical data.


Program Committee Members

The following individuals have confirmed their commitment to serving on the Program Committee should our workshop be accepted. Their responsibilities will include reviewing 2-5 position papers/bios with statements of interest from potential workshop attendees, promoting the workshop within their personal networks, and, if possible, attending the workshop to meaningfully contribute to the MOSafely mission:
Zahra Ashktorab, Research Staff Member, IBM Research
Jeremy Blackburn, Assistant Professor, Binghamton University
Lindsay Blackwell, Senior Researcher, Twitter
Laura Brown, Senior UX Researcher, Facebook
Rosta Farzan, Associate Professor, University of Pittsburgh
Ana Freire, Researcher and Lecturer, Pompeu Fabra University
Shion Guha, Assistant Professor, University of Toronto
Shirin Nilizadeh, University of Texas at Arlington
Vivek Singh, Associate Professor, Rutgers University
Kathryn Seigfried-Spellar, Associate Professor, Purdue University
Thamar Solorio, Associate Professor, University of Houston
Jacqueline Vickery, Associate Professor, University of North Texas

Acknowledgements

This research is supported by the U.S. National Science Foundation under grant #IIP-1827700. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. National Science Foundation.



CSCW MOSafely Workshop Attendees


Zhila Aghajari – is a Ph.D. student in the Department of Computer Science and Engineering at Lehigh University and a member of the Social Design Lab. Her current research broadly focuses on leveraging the impact of social norms to promote more prosocial behavior online. In one of her recent works, she explores different mechanisms by which social norms can influence the assessment of and response to hateful comments in online communities; this project is aimed at designing interventions around social norms to promote more desirable and civil discussions online. In another project, she explores the influence of social norms on individuals’ responses to misleading views, with the aim of designing interventions that underscore social norms to protect vulnerable people from engaging in the further spread of misleading views.

Interest Statement AI-mediated communication (AI-MC) is growing rapidly in everyday interactions. Examples range from the involvement of AI in text-based messaging, such as smart replies in chat applications, to richer communication media where AI filters now allow individuals to change their appearance in live conversations. However, little is known about the effects of AI-mediated communication on interpersonal relationships, and various complexities make AI-MC research difficult to carry out. My participation in this workshop will bring attention to some of the methodological challenges in studying AI-mediated communication. I will discuss the need for innovative approaches to address the methodological challenges in AI-MC research.



Hanan Aljasim – is a Ph.D. candidate in the Department of Computer Science and Engineering (SECS) at Oakland University and a member of the Oakland HCI lab. Her current research studies and designs for women’s safety in mobile social matching apps that connect users for rapid face-to-face encounters. She puts a particular focus on online-to-offline interaction, the risks of harm that arise in this process, and the potential for AI to maintain user safety.

Interest Statement I am interested in inspiring safety-conscious designs for the future of opportunistic social matching systems and other apps that scaffold online-to-offline interactions for young women. Opportunistic social matching systems aim to match people for immediate social interaction whenever the opportune context arises. Current apps like Tinder and Bumble are rapidly approaching the vision of near-immediate interaction for various interaction goals. Such apps also expose users to online-to-offline risks like sexual violence, and I believe AI can be better leveraged in these systems to support user safety and reduce risk of harm. I hope that MOSafely, including its community and platforms, is the best place for this research direction.



Dominic DiFranzo – is an Assistant Professor of Computer Science and Engineering at Lehigh University. His research in human computer interaction translates established social science theories into design interventions that encourage social media users to stand up to cyberbullies, fact check fake news stories, and engage in other prosocial actions. In his effort to better implement and test these design interventions, he’s also developed new experimental tools and methods that create ecologically valid social media simulations, giving researchers control of both the technical interface and social situations found on social media platforms. His research has been published in numerous conferences and journals, including the ACM CHI Conference, the ACM CSCW Conference, the International World Wide Web Conference, and the ACM Web Sci Conference.

Interest Statement I’m interested in exploring the methodological and practical challenges in designing and testing online community interventions. How do we better link observational studies that explore what is actually happening online with experimental studies that allow us to find causal relationships? How do we use these findings to design and evaluate new tools, interventions, community governance structures, etc., to help build safer spaces online? How do we as researchers collaborate with platforms and the communities that form on them?



Belén C Saldías Fuentes – is a Ph.D. student at the MIT Center for Constructive Communication, working at the intersection of natural language processing and community values, specifically interested in child-aware content and the long-term effects of children and various communities interacting with and being influenced by online AI systems. More recently, she has been interested in rationalizing content moderation and exploring interpretability techniques for such methods. Belén holds a master’s degree in Computer Science from P. Universidad Católica de Chile, for which she pursued her research thesis at the Institute for Applied Computational Science at Harvard. Belén has also worked in industry, including two years at Falabella.com, where she led and implemented the company’s first in-house machine learning systems. Most recently, she has worked as a research intern at Google Research, focusing on evaluating language translation models. Born and raised in the central-south regions of Chile, she currently resides in Boston.

Interest Statement I see this workshop as a crucial opportunity to learn about what others understand by “Mak[ing] The Internet A Safer Place For Youth,” learn about their approaches and grow my network of people working towards this mission. Last year I held a workshop breakout session where we discussed Towards Child-Aware Machine Learning with a Focus on NLP Challenges and Applications (https://github.com/bcsaldias/icml-wiml-child-aware-ml). This year, inspired by Professor Veronica Barassi’s most recent book—titled “Child | Data | Citizen: How Tech Companies are Profiling Us from before Birth”—I have increasingly started to bring ethical considerations and opportunities to my technical work in natural language processing. My main children-computer interaction field deployment is INSPIRE (http://inspire.media.mit.edu), a mentorship-like program that interfaces middle-school students and mentors through a dialog agent to expose them to role models to help enhance metacognitive traits.



Gionnieve Lim – is a Ph.D. student at the Singapore University of Technology and Design. Her research focuses on misinformation and the use of interface interventions such as labels, warnings, and visualisations to address the issue. Her recent work investigates user interactions with explainable artificial intelligence interfaces in the context of fact checking, seeking to understand how various explanations are perceived by users and what considerations should be made to improve their design.

Interest Statement As AI becomes more pervasive on the Internet, particularly in social media, numerous ethical problems have surfaced due to the callous application of the technology. As such, there has been a turn towards HCAI, which seeks to incorporate human values and sensibilities in the design of AI technologies to foster more considerate and sustainable solutions. In the workshop, it will be interesting to speak with various practitioners who offer their unique angles on HCAI and to participate in activities that can bring these people and perspectives together.



Sang Won Lee – is an assistant professor in the Department of Computer Science at Virginia Tech. His research aims to create social computing systems that facilitate empathy among users and collaborators. His research vision of computer-mediated empathy comes from his background in computer music, striving to bring the expressive, collaborative, and empathic nature of music to computational systems. On one end, from an empathizer’s perspective, he creates interactive systems that facilitate understanding by providing ways in computer-mediated communication to share targets’ perspectives and transfer their context. On the other end, from the target’s perspective, he uses technologies that help users better express their intentions and emotions and ground communication. He has applied these approaches to various applications in creative domains, including music-making, design, writing, and programming. He has been an active author in top-tier human-computer interaction venues, like CHI, CSCW, UIST, and C&C.

Interest Statement To create a safe online environment for adolescents, one common approach is to use intelligent algorithms that aim to detect online risk behaviors (e.g., cybergrooming) in social media and online platforms. However, users often have to share their private information (e.g., text messages) with an intelligent system. The tension between adolescents’ privacy and their security can certainly impede the wide deployment of such algorithms. In the meantime, we can complement this approach by educating potential target users, adolescents. However, an authentic and effective prevention program may challenge them to engage in uncomfortable discourse or be exposed to explicit content. Such challenges can raise ethical issues, as being involved in such a program may trigger traumatic experiences from their past. In this workshop, I am interested in the latter approach, hoping to learn about the challenges involved in designing, developing, and deploying socio-technical interventions for cybergrooming.



Jinkyung Park – I am a Ph.D. candidate in the School of Communication and Information at Rutgers University. My research focuses on finding ways to prevent/reduce aggressive online interactions such as cyberbullying and online incivility. For my dissertation project, I designed and implemented an online experiment to examine whether a positive background embedded in an online discussion forum would be effective in reducing uncivil conversations. My secondary area of study is fairness in machine learning algorithms. I study theoretical and empirical evidence to support fair decision-making by various machine learning algorithms, such as mental health detection, cyberbullying detection, and misinformation detection algorithms. In addition, I have worked on a research project regarding changes in privacy perceptions during the COVID-19 pandemic. In particular, I looked at how individuals balance privacy concerns against the global good of slowing the spread of COVID-19.

Interest Statement Attending the MOSafely workshop will be a great opportunity for me to engage in community building toward youth online safety. In particular, I am interested in defining and quantifying risks (e.g., cyberbullying) to establish valid ground truth for machine learning systems. I would like to address the question of how we can refine the current definitions of cyberbullying (which are mostly based on the school bullying literature) and, consequently, advance cyberbullying detection technologies built upon a youth-centered definition of cyberbullying. I would also like to have a conversation about the possibilities of (un)expected bias that could be embedded in or emerge from multi-modal online risk detection algorithms and ways to mitigate such bias. The MOSafely workshop will be an important enabler that will support my growth as a researcher and as a member of the research community working to make the Internet a safe place for youth.



Devansh Saxena – is a Ph.D. candidate in the Department of Computer Science at Marquette University and a member of the Social and Ethical Computing Research Lab. His research interests include investigating and developing algorithmic systems used in the public sector, especially the child-welfare system. His current research examines collaborative child-welfare practice, where decisions are mediated by policies, practice, and algorithms, and seeks to map out the power, politics, and economics of the data infrastructures employed in child welfare. His work is driven by human-centered data science because it is imperative to understand the context in which (and how) data is collected about children and families, how this data is used to produce algorithmic outcomes, as well as the social impact of such algorithmic tools on caseworkers and communities.

Interest Statement Research in HCI has continued to focus on participatory and human-centered design methodologies for building technologies that support the needs of foster youth. However, the child-welfare system is a complicated socio-political domain with several different stakeholders (e.g., birth parents, foster parents, foster youth, caseworkers, legal parties, policymakers) with competing interests. It is necessary to engage in collaborative discussions involving academics, practitioners, and policymakers and to examine the ethical and legal challenges in designing pragmatic systems that offer utility. I bring to this workshop my experiences at a child-welfare agency where I conducted a two-year ethnography. Caseworkers often surveilled foster youth’s social media to figure out where they were and whom they were interacting with. Caseworkers shared their frustrations about foster youth’s social media use and how the safety measures put in place did not adequately account for social media platforms, where much of the unsafe and risky interactions were taking place.



Ben Stein – A former musician, software engineer, and moonlight DJ, I tamed my eclectic career urges in 2019 and became a PhD student in Information Science at the University of Pittsburgh. I’m advised by Dr. Rosta Farzan. My research focuses on how computing technologies can create, describe, and foster human relationships. Recently, I’ve been applying this interest to adolescent online safety: how it’s mediated by the relationship between parent and child, and how well (and how poorly) modern interventions serve the needs of parents and children. As a software engineer, I’ve worked for industry-leading companies in finance and in medicine, serving in multiple technical roles as a senior engineer. I’m also an active contributor to the open-source community, with packages published in multiple software ecosystems and contributions to open-source projects, including Microsoft’s Visual Studio Code. But if all that doesn’t work out, maybe I’ll give that DJ thing another try.

Interest Statement As a participant in the initial meeting of MOSafely, I’m excited to begin the conversation around how we can leverage the tools we have to better serve our youth as they take their initial steps into an online world. They face real threats, including cyberbullying, exposure to disturbing content, and solicitation by strangers, with grave consequences. However, research has also demonstrated how valuable participation in a connected world can be. It provides exposure to diverse perspectives, promotes emotional resilience, and empowers youth to solve the unique problems that they face. In this matter, it is clearly better to light a candle than to curse the darkness. At the workshop, I look forward to discussing how we can leverage the power of computing techniques like AI to do just that. I hope to incorporate the insights that we share into my future work in intervention design and to offer ideas from my work that might support the work of my collaborators as well.



Daricia Wilkinson – is a PhD candidate at Clemson University in the Human-Centered Computing program. Her research lies at the intersection of Human-Computer Interaction and recommender systems. Her most recent work focuses on identifying design options that promote safe online interactions while applying interdisciplinary methods to investigate barriers to online safety, specifically for under-represented communities. During her time at Clemson University, she has collaborated with researchers from IBM Research, NortonLifeLock Research Labs (formerly Symantec), Max Planck Institute, and others. She is a recipient of fellowships from both Facebook and Google. She previously received her bachelor’s degree in Information Systems and Technology from the University of the Virgin Islands, and her master’s degree in Computer Science from Clemson University.

Interest Statement At its core, my research unites two elements important to online safety: the transparency of the underlying mechanisms in online systems and the development of support tools that are grounded in users’ needs. Specifically, I look at developing safety mechanisms on social media that are Artificially Intelligent and Inclusive by Design. While prior work has studied elements of safety threats in isolation, I adopt a wider lens to reveal the entangled nature of day-to-day experiences and uncover nuances in safety intentions depending on the harm. In my work, I take steps towards (1) filling the gap in empirical understanding of users’ perceptions of threats and how those perceptions are associated with their intentions to engage with support mechanisms, (2) understanding the effectiveness of current approaches to justice, and (3) providing specific recommendations that guide the development of equitable and inclusive safety mechanisms.



Douglas Zytko – is an Assistant Professor in the Department of Computer Science and Engineering at Oakland University. He is also Director of the Oakland HCI Lab, through which he has advised over 20 graduate and undergraduate students from myriad disciplines, including human-computer interaction, psychology, communication, and educational leadership. The Oakland HCI Lab broadly studies and designs for online-to-offline safety. Examples include computer-mediated consent to sex and sexual violence, safety-conscious social matching system design, virtual reality storytelling as a solution to misinformation impacting our physical health, and human-AI interaction for the protection of marginalized user groups.

Interest Statement The unique combination of computer-mediated and face-to-face communication gives rise to significant risk of harm against youth and young adults and, with it, unique opportunity for AI intervention. I believe MOSafely, as an open-source community and platform, is optimal for exploring these risks and potential AI interventions because it can foster unified cross-sector and cross-discipline action. A context of online-to-offline risk that I am focused on is mobile social matching apps such as Tinder, Bumble, and Grindr, which are simultaneously becoming a standard way for youth to meet new people as well as a significant sexual violence risk factor. I am interested in how AI can be incorporated into social matching apps for user safety rather than just user discovery, especially through participatory AI design methods that empower at-risk users to create AI in their vision of safety.


