How media coverage and social media amplified false information following a terrorist attack.

In the first 72 hours after the Bondi Beach terrorist attack, misinformation spread rapidly across social media and news platforms. Early false reports, speculation and errors by artificial intelligence systems shaped public perception and contributed to confusion and harm.

We examine the stages through which misinformation develops after a major incident and offer guidance on how audiences can critically navigate breaking news.

In the immediate aftermath of a terrorist attack, accurate information is critical. Whether people are present at the scene or following events remotely, a clear understanding of what has occurred supports both public safety and informed decision-making. However, the pressure on reporters to publish quickly, combined with the public's desire for immediate answers, often competes with the need to verify facts, creating conditions in which misinformation can take hold.

A distinction should be drawn between misinformation and disinformation. Misinformation refers to the inadvertent spread of inaccurate information without intent to deceive, while disinformation involves the deliberate creation and distribution of false content intended to mislead. Both can cause significant harm, particularly in the hours immediately following a crisis. 

The 2025 Bondi Beach attack provides a recent and instructive example. False claims about the incident not only affected those directly involved but also contributed to fear and confusion in the broader community, demonstrating how quickly inaccurate narratives can take hold during a developing crisis. 

The Bondi Beach Attack

On December 14, 2025, two men carried out an attack during a local Hanukkah celebration at Bondi Beach. The perpetrators, a father and son, killed 15 people and injured 42 others. The younger of the two, Naveed Akram, had connections to the Islamic State but had been removed from a watch list maintained by the Australian Security Intelligence Organisation (ASIO) after several years without detected suspicious activity. 

Witnesses reported hearing gunshots from a bridge overlooking Archer Park, which sits just off the beach, with a grassed area facing the sand and a car park behind. Those attending the event sought shelter or remained in place. The attack lasted approximately eight minutes, during which two civilians, Reuven Morrison and Ahmed Al Ahmed, attempted to intervene. Both perpetrators were subsequently shot by police; Naveed Akram survived, while Sajid Akram died at the scene.

Misinformation in the Aftermath

In the hours following the attack, significant misinformation circulated across social media and online platforms. Early reports incorrectly identified the perpetrators and, in some cases, falsely linked unrelated individuals to the attack, leading to harassment and threats directed at innocent people. 

A bystander, who did not want to disclose his name to the media, intervened during the attack but was also misidentified online; both police and onlookers initially perceived him as a third attacker rather than as someone trying to help. Separately, conspiracy theories emerged suggesting the attack was staged as a “false flag” operation, despite no credible evidence to support such claims.

Artificial intelligence systems also contributed to the confusion. At least one AI chatbot incorrectly identified Ahmed Al Ahmed, who had attempted to confront the attacker, as one of the perpetrators. These examples illustrate how misinformation can rapidly shape public perception before verified information becomes available.

Social media platforms, including X (formerly Twitter), Facebook and TikTok, played a significant role in accelerating the spread of false information. Eyewitness footage and unverified claims circulated widely before journalists or authorities could confirm details.

Research on crisis communication indicates that emotionally charged or alarming information tends to spread more rapidly online, even when inaccurate. In the case of the Bondi attack, rumours about additional perpetrators, inflated casualty numbers, and incorrect claims about the attackers’ identities were widely shared in the early hours after the incident. Algorithmic recommendation systems, which prioritise engagement over accuracy, further amplified this content, contributing to widespread confusion and fear.

Stages of Misinformation

Researchers in crisis communication and digital media have identified recurring stages through which information develops after major events such as terrorist attacks. Understanding these stages helps explain why misinformation is particularly prevalent in the first 72 hours following an incident. 

Stage 1: Breaking News (0–6 Hours)

The first stage begins immediately after an attack, when information is extremely limited and news organisations rely heavily on eyewitness accounts, emergency communications, and early social media posts. During this phase, both police and media outlets misidentified a local resident as a possible third perpetrator. Social media also spread reports of a potential attacker in Dover Heights, a claim authorities later confirmed to be unfounded.

Research on misinformation diffusion indicates that novel or emotionally charged claims are more likely to be shared online, allowing early rumours to circulate widely before official information is available. Common inaccuracies at this stage include:

  • The number of perpetrators
  • The precise location of the incident
  • Casualty figures
  • The identity or motivations of those responsible

Stage 2: Narrative Formation (6–24 Hours)

During the second stage, journalists begin constructing a more coherent account of events, updating their reporting as new information emerges from police, witnesses, and investigators. However, misinformation that took hold during the first stage frequently continues circulating even after corrections have been issued. 

In the Bondi case, another Sydney man who shared the name Naveed Akram with one of the perpetrators faced serious threats after his driver’s licence details were leaked online, incorrectly linking him to the attack. Research into information diffusion has found that false stories are more likely to be shared than accurate ones, meaning inaccurate claims can persist long after corrections become available.

Media coverage at this stage typically focuses on:

  • Identifying suspects or perpetrators
  • Establishing the timeline of the attack
  • Reporting initial casualty figures
  • Sharing eyewitness testimony 

Because information remains incomplete, journalists may inadvertently repeat incorrect details from earlier reports. 

Stage 3: Amplification and Interpretation (24–48 Hours)

In the third stage, discussion expands beyond the basic facts of the event to include analysis and commentary. Political leaders, commentators, and social media users begin debating the causes of the attack and its broader implications. During this phase, misinformation can become embedded within ideological narratives, with false claims used to support arguments about immigration, religion, national security, or identity. 

In the Bondi case, speculation about a “false flag” operation reached its peak during this period, with more than 17 million media impressions recorded linking the attack to broader geopolitical narratives. Research indicates that false narratives tend to spread more widely because they often contain novel or emotionally provocative content that increases engagement on social media platforms. This amplification can give rise to: 

  • Conspiracy theories about the attack 
  • Misidentification of suspects or victims 
  • Exaggerated or incorrect casualty reports 
  • Politically motivated misinformation

Stage 4: Verification and Correction (48–72 Hours)

In the final stage of the early reporting cycle, official investigations yield verified details about the attack. Law enforcement agencies release confirmed information regarding the perpetrators, the timeline of events and the motives involved. Journalists update their coverage accordingly, gradually replacing early speculation with verified facts.

However, research indicates that initial misinformation often continues circulating well after corrections have been issued. This is sometimes referred to as the “continued influence effect”, the tendency for individuals to retain incorrect beliefs even after they have been explicitly disproven. The persistence of false narratives surrounding the Bondi attack illustrates this pattern and underscores the long-term harms that early misinformation can cause. 

97 per cent of adults in Australia have insufficient ability to verify information online when attempting to identify misinformation.

Navigating Breaking News: Guidance for Audiences

The Bondi Beach attack did not end when the gunfire stopped.

For the Sydney man who shared a name with one of the perpetrators, it continued in the form of leaked personal details, death threats, and a driver’s licence circulated to strangers, leaving him unable to safely leave his home. For Ahmed Al Ahmed, who risked his life to disarm an attacker, it continued when X’s built-in AI tool Grok incorrectly identified him as one of the perpetrators when users asked about the viral footage. For the broader public, it continued through misinformation circulating across X, Tumblr and Telegram, false reports of additional attackers, inflated casualty figures and conspiracy theories that repositioned victims as suspects and framed the attack as staged. 

This is what misinformation costs. Not abstract credibility, but concrete, measurable harm to real people in real time.

The problem is not that audiences are careless. It is that the systems through which breaking news now travels are not built for accuracy; they are built for engagement.

A landmark MIT study found that falsehood diffuses significantly farther, faster, deeper and more broadly than the truth across all categories of information, in many cases by an order of magnitude. Crucially, this is driven by humans rather than bots: people are more likely to share novel information, and false news tends to be more novel than true news, inspiring greater surprise, fear and disgust in those who encounter it. Algorithmic platforms, optimised to maximise engagement rather than accuracy, amplify this tendency further.

Nor does the problem resolve itself once corrections emerge. Misinformation exerts a lingering influence on people’s reasoning even after it has been corrected, an effect researchers call the continued influence effect. Although corrections do reduce belief in misinformation to some degree, they rarely eliminate it entirely.

A false story shared in hour one does not become harmless simply because it was disproven in hour 48. 

What audiences can do is modest but not nothing. Wait for official statements before sharing. Cross-reference claims across established newsrooms. Treat the most emotionally compelling content, the clip that confirms what one has already suspected, the name that is already trending, with the most scepticism, not the least. The first 72 hours after an attack are when the information environment is most dangerous. They are also when the instinct to share, to react, to make sense of something frightening is at its most powerful. That tension, between the human need to respond and the need to wait, cannot be resolved by any platform or policy. It can only be resolved, each time, by individual judgment.

AUTHOR

Kaylee Janysek

Research Assistant, IEP

Vision of Humanity

Vision of Humanity is brought to you by the Institute for Economics and Peace (IEP), by staff in our global offices in Sydney, New York, The Hague, Harare and Mexico. Alongside maps and global indices, we present fresh perspectives on current affairs reflecting our editorial philosophy.