As the Israel-Hamas war has flooded social media with violent content, false information and a seemingly limitless swell of opinions, lawmakers and users have accused platforms like TikTok and Facebook of promoting biased posts.
Tech giants have denied the charges. TikTok, accused of elevating pro-Palestinian content, blamed “unsound analysis” of hashtag data. Some Instagram and Facebook users circulated a petition accusing the platforms’ parent company, Meta, of censoring pro-Palestinian posts, which Meta attributed to a technical bug.
Antisemitic content swarmed onto X, the platform formerly known as Twitter and run by the billionaire Elon Musk. X’s chief executive, Linda Yaccarino, said in a post on Thursday about antisemitism that “there’s no place for it anywhere in the world.”
Where the truth lies, however, is hard to discern, according to academic researchers and advocacy groups. They said the debates over content related to the Israel-Hamas war had highlighted the roadblocks complicating independent analysis of what appears on the major online services. Instead of conducting methodical studies of online discourse, researchers must try to grasp its scope and effects using inefficient and incomplete methods.
The murkiness enables people to make dubious claims about what is dominant or popular online and allows the platforms to retort with similarly flimsy or warped evidence, limiting accountability on all sides, the researchers said.
“We’re in desperate need of vigorous, informed research on what the actual impact of platforms are on society, and we can’t do that if we don’t have access to data,” said Megan A. Brown, a doctoral student at the University of Michigan who researches the online information ecosystem.
Inflammatory content — and what to do about it — remained top of mind at social media platforms this week. More than a dozen Jewish TikTok creators and celebrities, including the actors Sacha Baron Cohen and Debra Messing, confronted TikTok executives and employees in a private meeting about the platform’s handling of antisemitism and harassment. After Mr. Musk endorsed an antisemitic post on X, internal messages showed that IBM cut off $1 million in planned advertising spending.
Researchers also tried to understand a surge of interest in a decades-old letter from Osama bin Laden. The so-called “Letter to America” criticized the United States and its support of Israel, repeating antisemitic tropes and condemning the destruction of Palestinian homes.
After reviewing public social media posts from Tuesday to Thursday, researchers from the Institute for Strategic Dialogue concluded that references to the letter jumped more than 1,800 percent on X. They found 41 “Letter to America” videos with more than 6.9 million views on TikTok.
The researchers, Isabelle Frances-Wright and Moustafa Ayad, said in an interview that they wanted to do much more sophisticated analysis. Instead, they had to run searches by hand using basic terms, unable to analyze the letter’s spread by region or language.
“Much of this content, particularly video content, is not tagged with the type of text we can manually search, so anything we’re finding is really just the tip of the iceberg,” Ms. Frances-Wright said.
Jamie Favazza, a spokeswoman for TikTok, said that the company supported independent research and had given more than 130 academic research teams access to analyze the site. “We’re working diligently to expand eligibility to civil society researchers in the U.S. soon,” she said.
Meta declined to comment. X did not respond to a request for comment.
Background data about engagement, volume and other metrics is usually retrieved through a platform’s application programming interface, or A.P.I. The major tech companies have long offered some degree of access, but researchers said that access now seemed to be shrinking.
This year, as Mr. Musk sought to find new ways to monetize X, the company started charging thousands of dollars for monthly access to its A.P.I., effectively shutting out many researchers. Meta’s support for the data analysis tool CrowdTangle has dwindled amid internal concerns about damaging the company’s reputation.
These days, researchers said, the data they can study is often dictated by what platforms want to release — “research by permission,” some explained — and is often unreliable and delayed long past the point of relevance.
“With data, you can always paint the picture that you want when you are the only one who has access to that data,” said Sukrit Venkatagiri, an assistant professor of computer science and misinformation expert at Swarthmore College. “If we have no lens into what is happening in these spaces that have billions of users, that is a little scary.”
TikTok has been at the center of the recent firestorm, partly because of its ownership by the Chinese company ByteDance, with some critics claiming that it is pushing pro-Palestinian content to align with the government in Beijing. TikTok has been accused of amplifying pro-Palestinian videos through its powerful algorithmic feed and of failing to address antisemitic content.
TikTok has issued multiple statements pushing back on accusations of bias, pointing to polls showing that young Americans supported the Palestinian cause before the company existed. The company has also tried to poke holes in data about popular hashtags that critics said revealed a pro-Palestinian bent on the service.
This week, TikTok said that the hashtag #standwithIsrael had fewer videos than #FreePalestine, but “68 percent more views per video in the U.S., which means more people are seeing the content.” It also pointed to public data on Instagram and Facebook, which showed millions of #FreePalestine posts and fewer than 300,000 #standwithisrael posts.
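TikTok’s comparison rests on simple per-hashtag arithmetic: total views divided by video count, so a hashtag with fewer videos can still average more views per video. A minimal sketch of that calculation, using invented placeholder counts rather than real platform figures:

```python
# Illustration of the "views per video" comparison TikTok cited.
# The counts below are invented placeholders, not actual platform data.

def views_per_video(total_views: int, video_count: int) -> float:
    """Average views per video for a hashtag."""
    return total_views / video_count

# Hypothetical counts: the second hashtag has fewer videos but a
# higher per-video average.
hashtag_a = views_per_video(total_views=9_000_000, video_count=3_000)  # 3000.0
hashtag_b = views_per_video(total_views=5_040_000, video_count=1_000)  # 5040.0

pct_more = (hashtag_b / hashtag_a - 1) * 100
print(f"{pct_more:.0f}% more views per video")  # 68% more
```

As the researchers quoted below note, such an average by itself says nothing about hashtag variants, context or how the content is being used.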
Researchers like Mr. Venkatagiri said such data lacked context: “You have to look at the whole information environment, at every single variant of that hashtag, whether it’s being used in support or against a particular topic, what other text is being used, if it’s just a tweet or in a video, comments, links,” he said. “There’s so much more you have to think about beyond just looking at this one-off analysis.”
Tech giants have said that they are trying to balance the interests of researchers with users’ privacy rights, while also ensuring the data is not used for commercial purposes.
“It’s not that there shouldn’t be guardrails placed around researchers, but when those guardrails are placed by the companies themselves, it introduces challenges to what kind of research gets done and by whom,” Ms. Brown said.
The Anti-Defamation League has talked with TikTok for more than a year about giving advocacy groups access to its A.P.I., but the war has made that access more urgent, said Yaël Eisenstat, a vice president at the Jewish advocacy group.
It is “prohibitively difficult” to independently determine whether TikTok is promoting content that favors Israel or the Palestinians, or to analyze its management of antisemitic content, she said.
As of this year, academic researchers from nonprofit universities in the United States and Europe can apply for free access to TikTok’s research A.P.I. with a defined research proposal, subject to a two- to three-week approval process. (Approval processes are common across the platforms.) The company said that it has already received proposals to study content related to the Israel-Hamas war.
The Digital Services Act, a new European Union law, now requires large online platforms to provide real-time data to researchers studying the risks of social media, and has pushed companies like Meta and YouTube to offer new tools for researchers. Similar requirements are being proposed in the Platform Accountability and Transparency Act in the United States.
Meta, which has faced criticism over flawed data and tussled in 2021 with a New York University research group, has updated how researchers work with its content archive (the company noted this summer that queries of its library take place in “controlled-access environments” that do not allow researchers to download data). YouTube, after years of pressure from groups like the Mozilla Foundation, is also opening up to independent research.
Susan Benesch, who runs the Dangerous Speech Project, a research group, hopes to study content emerging from the Gaza conflict but, for now, is relying mostly on anecdotal evidence from acquaintances working in trust and safety at social media platforms.
She knows that the companies, hoping to avoid public criticism, don’t have an incentive to release data to researchers. Transparency could, however, be “a huge opportunity” for society, she said.
“There’s a gold mine with all of these different veins of invaluable information hidden in there that no researchers in the course of human history could have even dreamed about until now,” Ms. Benesch said. “The platforms still won’t give it to us, but now at least it’s there.”