
What Are Dating Apps Doing to Protect Their Users?

by ProPublica, October 17th, 2022

Too Long; Didn't Read

Natalie Dong, then a 21-year-old engineering student, said she had been raped in her home by a man she had met on the dating website Coffee Meets Bagel. She contacted Bumble, Coffee Meets Bagel, Hinge and Tinder to report what happened. The first three platforms apologized and informed her they had banned the accused user; Tinder's process dragged on for weeks. Experts say online dating companies routinely fail to anticipate abuse and implement safeguards to prevent it. Despite pledges to shield users from sexual predators, the companies have done little to abide by them.

This story was originally published by ProPublica and is co-published with Columbia Journalism Investigations. It was reported by Brian Edwards, Elizabeth Naismith Picciani, Sarah Spicer and Keith Cousins of Columbia Journalism Investigations.


On a sunny afternoon in the summer of 2019, Natalie Dong stood outside the glass headquarters of the popular online dating platform Tinder, in downtown Los Angeles, with a poster board draped from her neck. It read: “MY RAPIST IS STILL ON TINDER.”


More than a year earlier, Dong, then a 21-year-old engineering student, said she had been raped in her home by a man she had met on a different dating website, Coffee Meets Bagel.


He told Dong he was on other dating platforms, including Tinder. She reported the assault to the police, but no criminal charges resulted.


Dong worried for the female users of these apps. “I realized he was probably still on these dating apps, going out and meeting new women,” she said of her alleged attacker.


She made a list of all the ones he said he used: Bumble, Coffee Meets Bagel, Hinge and Tinder. She contacted each of them to report what happened. “I need there to be consequences,” she said.


Bumble responded to Dong’s complaint in 20 minutes. Hinge got back to her after three days. Coffee Meets Bagel took 11 days to respond. All three platforms apologized and informed her they had banned the accused user.


But when Dong contacted Tinder, the process dragged on for weeks. After several email exchanges with no visible outcome, she sought an update on the company’s response.


“He is dangerous,” she wrote in a May 21, 2019, email, “and your women users deserve to be kept safe.”


The multibillion-dollar online dating industry has no meaningful standards for responding to reports of offline harm and removing those responsible from its platforms, Columbia Journalism Investigations and ProPublica found.


Despite pledges to shield users from sexual predators, the companies have done little to abide by them. Most companies have loosely defined procedures that force employees to rely on their own judgment.


Dating app users who report an attack, like Dong, often have to badger companies to take action.


The responsibility of responding to reports of assaults falls to the workers, known as moderators in industry parlance, who handle customer complaints.


They field questions about everything from billing disputes to account issues and scammer alerts. Some dating websites have specialized moderation teams addressing sensitive claims of fraud and abuse.


But no site has a team exclusively dedicated to addressing a risk inherent in an industry built on intimacy: sexual assault. Experts say online dating companies routinely fail to anticipate abuse and implement safeguards to prevent it.


Interviews with more than 50 former and current employees — from moderators at PlentyofFish to engineers and managers at OkCupid and eHarmony — reveal a patchwork of company systems in which executives tout customer safety while pushing policies designed for issuing refunds rather than vetting the intricacies of sexual violence.


Most employees feared speaking publicly about their experiences, some because they had signed nondisclosure agreements. They describe small moderation teams juggling hundreds of complaints a day.


Some employees earn just a few dollars more than the minimum wage in their states. They receive little training on how to handle factually complex and possibly criminal complaints involving rape. Still others contend with customer service quotas that make it difficult to do more than answer routine inquiries.


At Hinge, for instance, moderators who scour user profiles flagged by the company’s software as problematic can process up to 60 complaints per hour, according to two employees and a screenshot of the company policy.


That gives these employees an average of one minute to review, say, the use of racist and sexual terms or to assess the details of sexual assault claims and forward them up the moderation chain.


In this time, they also must decide whether to block the tagged user and to comb through that user’s messages to pull relevant information.


At OkCupid, according to two employees and a screenshot of 2020 moderation benchmarks, there’s a 15-complaint-per-hour quota for those who handle sexual assault claims and other more complex claims.


That means these moderators have four minutes on average to scrutinize user profiles and messages of both the complainant and accused, and respond to the person who filed the complaint.


For dating app moderators, many customer complaints take just seconds to address. But complicated cases, like those involving sexual assault, can put moderators behind their hourly quotas for the rest of the workday, according to multiple current and former employees at these and other dating platforms.


Such systems frequently fail victims. CJI and ProPublica heard from 224 dating app users through a questionnaire that sought input from people who had been “affected by sexual violence” after using a dating app.


Of those, 188 said they had experienced assault or harassment after using a dating app or had been matched with a sex offender or inappropriate person; 71 said they had reported a sexual assault to a platform, most of them in the past three years.


Of those users, 34 said they never heard back. (CJI was able to interview 33 of the 188 who said they experienced some form of assault or harassment and obtained police reports or documentary confirmation in 11 of those instances.)


From company silence after multiple allegations were filed, to the surprise discoveries that banned perpetrators have resurfaced on the apps, ramshackle systems have left victims of violence, the vast majority of them women, feeling traumatized a second time.


CJI shared these findings with every platform mentioned in the crowdsourced responses and asked about their moderation and safety processes. A CJI reporter contacted each dating site named in this story about specific employee claims and individual user cases.


Five platforms declined to answer questions and instead provided a general statement. Each said the safety of users is a top priority and defended the companies’ efforts to protect them. (You can read the full statements here.)


Three provided partial answers, and one didn’t respond. Only Bumble agreed to make an executive available for an interview.


Miles Norris, product chief at Bumble, has overseen that company’s moderation teams for nearly seven years.


He said Bumble, the country’s second-most popular dating platform, is already investing in new measures to protect its 42 million users from sexual assault, including a computer algorithm to detect red flags in user behaviors and a software program to verify the authenticity of user photographs.


Still, Norris recognizes the need to improve industry standards. “There are bad actors out there, and you need to be constantly improving,” he said. “Not only by your systems and processes but also the level of empathy, understanding of the users, how you respond to them.”


Match Group — the $2.4 billion corporation that owns most of the nation’s top dating apps, including Tinder, OkCupid, PlentyofFish and Hinge — declined multiple interview requests and didn’t respond to written questions.


A Match Group spokesperson told CJI in an earlier investigation in 2019 that the company lacked uniform procedures for responding to customers’ sexual assault complaints across its brands.


In December 2020, the company stated that “in recent years, we have been working toward uniform policies across all of our companies.”


Match Group executives have said customer safety is “paramount to us.” The company hired a new safety chief last fall and has invested in new technology to enhance its safety features.


In December, Match announced it’s partnering with a victims’ advocacy group to audit its sexual violence policies. An email obtained by CJI suggests that review is complete.


In its statement for this article, the company said it is “outraged that singles may experience fear, discomfort or worse when looking to meet someone special, and will always work to improve our systems to make sure everyone on our apps feels respected and safe.”


For people like Dong, who wanted the man she accused expelled from dating apps, those systems put the onus on the user to get a consequential response.


Not long after Dong had reported her rape to Tinder, in May 2019, she sent the company details about the man, including a user name, age and phone number. Days passed without a reply.


On the 12th day, she sent the follow-up email seeking an update and shared how the other dating apps had replied. Within 24 hours, a Tinder employee informed her that the company couldn’t provide additional information.


Frustrated with what she calls a “runaround,” Dong decided to take more drastic action. And so around 3 p.m. on June 18, 2019, she stood in the baking sun outside Tinder’s L.A. office for about an hour.


Her sign, a black poster board with white letters scrawled across it, caught the eye of the few pedestrians who passed by. One snapped a picture of Dong, she said.


Eventually, an employee came out to offer her water, and another approached her to collect the same information she had already provided. Only after that did she receive an email from Tinder letting her know it had banned the accused user.


Dong’s annoyance over the ordeal remains palpable. “I was like, ‘It was that easy?’” she said. “Why did I have to go down there to get you to do this?”

When Natalie Dong contacted Tinder, the process dragged on for weeks. Credit: Courtesy of Natalie Dong


In their infancy, dating platforms are typically focused on one goal: growth.


At that stage, executives “aren’t really thinking about all of the terrible things that can go wrong,” said Adelin Cai, a Twitter and Pinterest veteran who, in 2020, founded the Trust and Safety Professional Association, which is aimed at improving online moderation.


The startups treat moderation as an afterthought, Cai said, letting a crisis arise before setting a policy.


She noted that internet companies can fail to set two fundamental moderation standards: first, for monitoring user behavior through a complaint process; and second, for removing subjects who are found to have violated rules.


Employees at multiple dating apps describe a haphazard approach to content monitoring and customer service that left them ill-equipped when a user reported a sexual assault, especially during an app’s early days.


At OkCupid, for example, moderators had no corporate guidance from its launch in 2004 until 2015, interviews and records show. Today, they have at least two weeks of general training that covers billing, fraud and other sensitive issues.


Industrywide, much of the training of moderators has focused on nuts and bolts — how to access a queue or classify reports — according to insiders at these and other apps. Only a fraction of it touches on online dating sexual assault.


One Match Group platform has a manual of about 50 pages detailing top priority cases and recommended responses for fraud and abuse claims such as romance scams and online harassment. Its section on sexual assault is two pages.


That passage outlines what employees should do to reply to rape reports (“answer quickly, respond empathetically, give resources for help”) and what they shouldn’t do (“don’t send victims to police, no assumptions, choose language carefully”).


Internal company records suggest these guidelines grew out of an impromptu handbook that past employees had created on their own.


Amber Tevis lived this ad hoc experience as a moderator for the online dating company Zoosk between 2010 and 2015. At the time, the platform had more than 12 million users.


She was one of six moderators tasked with answering dozens of customer complaints each day: incidents of fraud, abuse and, occasionally, violence.


She relied on the few boilerplate customer service messages that Zoosk managers had provided until, one day in 2012, a woman called the site’s hotline to report her assault. Unsure of what to do, Tevis put the user on hold.


Her supervisor suggested the woman call the police. Tevis, who studied sociology in college and has no formal education in sexual assault response, remembers feeling as though “that person was on their own.”


The call led Zoosk moderators to draft a plan to handle rape reports: Get the reporting user’s name, email and other relevant information; ban the accused user.


“Any time there was a new situation, we would add that to the training materials,” Tevis said, explaining how the plan, like many of the app’s procedures, became part of the Zoosk employee manual.


Zoosk and its parent company, Spark Networks, didn’t respond to interview requests and written questions.


Standards for expelling accused users aren’t clear cut across the industry. Some platforms instruct moderators to ban a user after one accusation, barring contradictory evidence.


Others have had no set protocol for how or when to restrict access.


Lila Gyory worked on a four-person moderation team at Coffee Meets Bagel from 2016 to 2018, when the dating platform had several million subscribers.


She remembers flagging every complaint involving sexual assault for her manager and then discussing how to handle every accused user.


Should they ban the accused? Should they instead make a note on the account and expel the user if the person committed a second infraction? How should they handle accusations of harassment — perhaps with a three-strikes rule?


Gyory said she found the absence of a corporate policy stressful.


When she did ban someone, Gyory added that user’s profile to a spreadsheet of names, email addresses and photographs.


Yet it didn't take long before she discovered the same restricted accounts back on the site. She recalls that one accused user, angry about his expulsion, created a new Facebook profile to get around his prohibition.


She spotted him and shut him down. He set up another profile — again and again.


“It was like whack-a-mole,” Gyory said.


Coffee Meets Bagel didn’t respond to interview requests and didn’t answer most written questions. A company spokesperson said moderators follow a “zero-tolerance policy” requiring them to “swiftly ban users who exhibit bad behavior,” including sexual assault.


They build “a comprehensive profile of each banned user” so that any new accounts associated with the user “would be detected and immediately blocked from the platform.” Asked whether Coffee Meets Bagel had changed its policy since Gyory’s time, the spokesperson didn’t respond.


Over the years, as online dating companies have amended moderation policies, interviews and records suggest they haven’t adequately increased staffing at in-house moderation teams.


Employees at nearly every dating app said the team never scaled up as millions of users joined. The volume of customer complaints, they said, outpaced the staff’s ability to handle them.


At PlentyofFish, for instance, executives managed about 85 total employees in all departments over a five-year period as the company’s registered user base more than tripled from 30 million to 100 million. That meant, in later years, more than 1 million users per staffer.


OkCupid has relied on part-time and volunteer moderators to handle its complaints, four former and current employees said. One group of freelance moderators making $15 an hour while working 40 to 60 hours a week tried to unionize in 2015, according to documents obtained by CJI.


They demanded better pay and more employees to address complaints, among other things. Interviews and an internal survey show they never got this support.


Former and current OkCupid employees said the dating service’s moderators, now either in-house or outsourced, field at least 150 complaints a day. Match Group didn’t respond to written questions.


Most dating apps promise on their safety webpages to act on sexual assault complaints — or, at least, acknowledge receiving them. Many promote automated tools and in-app messaging for users to file reports. Some offer manual methods, including the rare phone line.


Before its purchase by San Vicente Acquisitions in March 2020, the dating site Grindr was alone in instructing its moderators not to send personalized responses to such complaints, according to three former employees.


A spokesperson for the new owner said it has “significantly invested in the Trust and Safety team over the last year” and hired a “head of customer experience” to review its sexual assault policies.


Asked whether this no-personalized-response practice was among the changes, the company declined to comment.


For dating app users, company assurances can ring empty. Among the 71 in the CJI/ProPublica survey pool who reported that they complained to an app about a sexual assault — a voluntary, nonscientific sampling — 37 said they did not receive a response from the app.


The numbers varied from app to app: 8 of the 10 who said they reported an assault to Bumble said they heard back; 9 of 29 got a response from Tinder; 5 of 9 from OkCupid; and 4 of 6 from Match.


Even those who received a response often expressed frustration, particularly when the reply was automated, which felt dehumanizing to them.


Sue M., 53, a PlentyofFish subscriber who works in corporate communications, is now a witness in a pending criminal case filed against a POF user she said forced her to masturbate him.


In July, six months after going to police, she reported him to the dating platform along with key details like his user name. She offered a copy of her police report, noting that the man was charged with a felony, second-degree sexual assault.


An email from a POF employee arrived in Sue’s inbox the next day — and asked for the accused’s user name again. Sue sent it a second time and reiterated that she had gone to the police.


Screenshots show the employee replied with the same boilerplate language. Twice, the employee encouraged Sue to “report this incident to law enforcement,” even though Sue had twice mentioned the criminal charges.


By August, the accused’s user profile had disappeared from the app, leading Sue to believe he had been banned. She emailed PlentyofFish to confirm that, but an employee informed her that the company doesn’t “disclose confidential details about other members,” the Aug. 21, 2020, email states.


(Cai, of the Trust and Safety Professional Association, says there’s no law preventing an internet company from sharing the outcome of a complaint with the person who had filed it. Match Group apps like OkCupid and Tinder have revealed results to users who reported a rape, employee interviews and the crowdsourced responses show.)


The POF employee answered Sue’s complaint yet ignored the criminal case. “I wish they would just acknowledge it,” she said. Match Group didn’t respond to written questions about Sue’s case, the details of which CJI shared, with Sue’s consent, with the company.


Those who report rape often consider the company’s lackluster response — none at all, or a perfunctory reply — as traumatizing as the incident itself, advocates say. That’s because people who disclose an assault want to be believed and to hear an apology.


Dating platforms could earn good will from users by taking this simple step, according to Karen Baker, a victims’ advocate who heads the Pennsylvania Coalition Against Rape and has, since 2000, advised schools, professional sports leagues and businesses to help them combat sexual violence.


“They need to hear everyone’s story ... and acknowledge it,” she said. “It is a human thing to say, ‘I am sorry that happened.’”


Multiple women told CJI in crowdsourced responses that a company’s swift and thoughtful reply — one that expresses empathy — made them feel heard. But a company’s acknowledgement didn’t always satisfy those seeking a sense of justice.


That’s what motivated Tracy Lytwyn to file a report with Bumble in 2018 after a man she had met on the platform removed his condom without her consent during sex. Some advocates consider the act, called “stealthing,” a form of sexual assault, but it’s not a criminal offense.


Lytwyn, a 30-year-old Chicago resident, said she had little confidence in the police but hoped Bumble could ensure the man wouldn’t hurt other users.


When Bumble sent Lytwyn an email acknowledging her report, she assumed the problem was resolved — until, several months later, she saw the man was still on the platform. She turned to Twitter to ask for an explanation.


“Hey, Bumble I reported this guy for assaulting me,” she recalled tweeting the company in May 2018. “Why is he still on Bumble?”


An employee responded with an apology and suggested the accused might be “deleting and recreating accounts, which is why he is re-appearing,” screenshots show. The next day, the employee assured Lytwyn that Bumble had “taken action against this user.”


Nearly a year later, however, she saw him back on the app again.


Again, she took to Twitter to demand answers. “A man who raped me is currently on Bumble in Chicago,” screenshots show her messaging the company via Twitter in October 2019, “and I’ve reported him twice.”


Another Bumble employee responded and apologized. This employee said the accused managed “to circumnavigate blocks,” screenshots show.


The employee assured Lytwyn that the company had “taken additional action to permanently block him from Bumble” and gave her account free perks as “a small token of our appreciation” — five Bumble Coins worth around $10, which she used to access features available to premium members.


Three dozen users told CJI they saw their attackers back on various dating platforms. These women, like Lytwyn, were more inclined to report their assault claims to the apps than to police.


In general, they thought the apps would be more likely to take action, typically by banning abusive users, and they were hesitant to subject themselves to what they perceived to be an invasive law enforcement process.


Lytwyn said she appreciated Bumble's personalized response but was frustrated that the accused man's profile reappeared on the app after the first ban. "How is it so easy for him to get back on there?"


Bumble declined to discuss how it handled Lytwyn’s reports, even after she signed a waiver allowing the company to talk about her case. In general, Norris, its product chief, said the company policy is to enact a “silent block” on an accused user’s profile.


That means the accused can access the app, but subscribers won’t see his swipes and messages. (In theory, the accused won’t realize he is blocked and therefore won’t attempt to sign up again using a new identity.)


Unlike a ban, a block protects the person who filed the report, Norris said. To ban a user, the company grabs photos, emails and other relevant information that prevent the person from creating new Bumble accounts.


Bumble recognizes the damage that would occur if it developed a reputation for harboring rapists, a spokesperson said.


“I don’t think there is any other category of our business that we can invest into other than safety,” the spokesperson said. “We do think that can be a big driver for our business, not just the right thing to do.”

Tracy Lytwyn took to Twitter after seeing a man she had reported to Bumble reappear on the site. Credit: Courtesy of Tracy Lytwyn


Match Group, eHarmony and Spark Networks executives expressed similar sentiments nearly a decade ago, when the California attorney general’s office, then led by Kamala Harris, unveiled an agreement with the three dating websites.


The 2012 deal, billed as a joint statement of “business principles,” established baseline standards for what the attorney general’s office called “important consumer protections”: reporting and response systems to address offline abuse.


As part of the agreement, which was voluntary, the companies said they would screen for registered sex offenders.


Those provisions were already in place at some companies, two sources familiar with the effort said.


And, as ProPublica and CJI found in an earlier article, Match Group applied them to its flagship, Match.com, but not to the free dating sites — including Tinder, OkCupid and PlentyofFish — that it has since acquired.


The companies largely maintained their status quo, and the attorney general's office scored a political victory. The office, which didn't respond to interview requests and declined to answer written questions, has done little to ensure the dating apps keep their promises.


Now some federal lawmakers are eyeing these same regulatory gaps.


After CJI and ProPublica published an article in December 2019 about the failure of dating apps to remove registered sex offenders from their sites, 11 Democratic representatives on the House Energy and Commerce Committee demanded that Match Group disclose its efforts to "respond to reports of sexual violence," according to a letter dated Feb. 20, 2020.


The committee sought information on the company’s reporting and response protocols. According to congressional sources, Match Group has offered limited answers that suggest it lacks a standardized system.


In response to the crowdsourced findings, Rep. Jan Schakowsky, D-Ill., who has spearheaded the congressional investigation, said in a statement that it is “unconscionable” that online dating companies don’t always respond to customers’ claims of sexual assault, and that accused users can get back on platforms.


“They must answer for their continued failures,” she said.


Schakowsky introduced a bill on May 7 that would require dating platforms to enforce their rules designed to prevent fraud and abuse.


Under the proposal, violations of this requirement would be categorized as unfair or deceptive acts and subject to enforcement by the Federal Trade Commission.


Match Group has reviewed the bill, according to a congressional aide, and has argued that online dating companies shouldn’t be held liable, citing Section 230 of the 1996 federal Communications Decency Act.


“Section 230’s broad immunity ensures that regulation does not hamstring technological innovation or suppress online speech,” Match Group wrote in a friend-of-the-court brief it and other internet interests filed in 2018, “while at the same time encouraging responsible online providers to take steps to prevent users from misusing their services.”


Section 230 was created to protect websites from being held liable for their users’ speech unless it was criminal.


But dating platforms, including Match Group, have successfully invoked Section 230 to deflect lawsuits claiming negligence for incidents involving users harmed by other users, including victims of sexual assault.


Often, judges dismiss cases before an aggrieved party can even obtain information about the company’s response to the assault. One result of this obstacle is that very few civil suits have been filed against online dating companies seeking to hold them liable for harm suffered by users.


Now, for the first time in two decades, there is serious discussion in Congress — motivated by widespread frustration with social media sites such as Facebook — about overhauling Section 230.


Carrie Goldberg, a New York City victims’ rights lawyer who specializes in cyberabuse, has reviewed the Schakowsky bill. She believes it falls short and argues the only solution is to pass an exception to the provision that would carve out offline harm from its blanket immunity.


“We have to reform Section 230,” said Goldberg, who believes there needs to be a congressional hearing of dating app CEOs, “so that victims can hold liable companies whose negligence is facilitating rapes.”


In the absence of meaningful regulation, some internet companies facing scrutiny over sexual assault policies have worked to reform.


Ride-sharing giant Uber, for example, issued an 84-page report in 2019 disclosing the number of sexual assault claims made by its users and drivers: nearly 6,000 over two years.


Uber devised a first-of-its-kind methodology to track incidents of sexual violence and audit its prevention measures.


It then invested in background checks for its drivers, created a specialized customer support team for handling sexual and physical assault complaints and drafted new rules to act on them.


In March, Uber joined its competitor Lyft in launching an effort to share the profiles of drivers who are deactivated from the platforms because of sexual and physical violence. Victims’ advocates have praised the company for its transparency.


Transparency is a critical first step to curbing the problem, according to Baker, of the Pennsylvania anti-rape coalition. (She also advised Uber on its effort.) Baker believes online dating companies should release similar figures.


That would allow users to choose the safest platform and the public to hold the industry accountable.


“These problems are happening at every single company,” Baker said. “The ones that are telling you about it are the good ones.”


No dating app has shared such statistics — not even when asked to do so by Congress.


When the House commerce committee asked Match Group to disclose how many sexual assault complaints its five biggest platforms had logged in 2019, according to a congressional aide, the company did not provide an answer.


Instead, the aide told CJI, Match Group had revealed a different number: It said it had referred 200 such claims to police that year.


Match Group does appear to be following Uber’s lead in other ways. Last fall, the company hired Tracey Breeden, Uber’s safety chief, to head its customer safety initiatives. Within months, she announced that Match Group was auditing its sexual assault policies.


The company also has invested in nascent safety technology for its apps — first, a panic button that users can press to call 911; and second, a background-checking service that users can access for a fee. (The latter is scheduled to begin its rollout at the end of 2021, first with Tinder, then with other apps.)


Match Group declined to make Breeden available for an interview. In response to questions, the company sent a months-old press release about her hiring and its audit.


“We are committed to creating actionable solutions by working collaboratively with experts to innovate on meaningful, industry-led safety approaches,” Breeden said in the Dec. 7, 2020, release.


Earlier this month, in a letter to shareholders, Match asserted that it is "setting ambitious standards focused on keeping people emotionally well and physically safe. This year, we will spend more than $100 million on product, technology and moderation efforts related to trust and safety."


That may offer little consolation to dating app users like Dong, who, despite getting the outcome she wanted from Tinder, has vowed never to use the service again.


The year before her protest, Dong had struggled with the fallout of her online dating experience.


She remembers waking up each day, vacillating between despair and rage. She dropped out of school and was unemployed. Eventually she went to therapy, where she decided to channel her emotions into action.


Records show she went to the police, but, according to Dong, prosecutors declined to pursue a case because of the she-said, he-said nature of the incident. Dong turned to the dating platforms as an alternative.


When an app doesn’t respond to a rape report, she said, “It’s just another example of the world telling you that they don’t care.”


Today, Dong said she still cherishes the reply from one platform: Bumble. She recalls feeling overwhelmed by the company’s “immediate support,” a three-paragraph email sent by a Bumble employee on May 9, 2019.


One sentence resonated. “We’re incredibly saddened reading the report you so bravely sent to us,” Bumble’s response read. “So this is just awful to hear.”


Dong, who had read Bumble’s response to a CJI reporter over FaceTime, touched her hand to her heart. “I appreciated that they took the time to show human compassion,” she said later. As for Tinder, the poster board that she hauled to its headquarters is now shoved under her bed.


Brian Edwards, Elizabeth Naismith Picciani and Keith Cousins reported this article as fellows for Columbia Journalism Investigations, an investigative reporting unit at the Columbia Journalism School. Sarah Spicer contributed as a CJI research assistant.


Funding for CJI is provided by the school’s Investigative Reporting Resource and the Stabile Center for Investigative Journalism.


Photo by Good Faces Agency on Unsplash