On the evening of Feb. 6, Central Washington University police in Ellensburg, Wash., raced to a report of shots fired on campus.
The university published a campus-wide alert at 5:35 p.m.: “There has been a report of an active shooter in the area of Lind Hall,” it read. “Stay out of the area. Check here for updates.”
But the updates didn’t come. What happened next is a textbook information vacuum, a breeding ground for panic-fed misinformation that thrives on a lack of facts.
For nearly an hour and a half after the initial alert, students heard nothing from either police or the school. In the absence of official information, they turned to Twitter and the police scanner.
Moments after the campus alert hit students’ phones, they began posting on social media about an active shooter. A massive police response on campus further legitimized the alert. “It’s 100% real,” exclaimed one tweet. Others posted video of the police and photos of students in hiding.
Local media picked up the story about 15 minutes later, sticking to the university’s “active shooter” description but couching it as a report. A Seattle radio station interviewed a reporter from the school newspaper who said there were as many as three shooters.
When pressed for her source, she said she was listening to the police scanner.
A few minutes later, another student posted on Twitter, “One confirmed dead. 2 being airlifted. There are 4 shooters. Still roaming all over campus.” Amid the retweets, she was asked for her source. “Police scanner radio,” she replied.
Everyone was listening to the scanner because there was no other source of information. Available online and in an app, a live stream of the scanner was also shared by a random news aggregator on Twitter, making it even easier to find. Desperate for details, those who were most at risk devoured every scrap of information available.
As Twitter pulsed with scanner interpretations and the story spread nationwide, other law enforcement agencies offered to help. The ATF tweeted that special agents were “responding to reports of active shooter” at the university. Still, no new information was released.
“We had seven to 10 agencies ready to support,” a university spokesperson said later.
Working off Twitter and media reports, other agencies offered their help, which lent further legitimacy to the incident in the eyes of those listening to the scanner and watching Twitter. It sounded bad.
But there was no shooting.
In the Factal newsroom, we were also listening to the scanner. Moments after the university’s alert was published, detected via our technology, we had published an update sourcing it, but our editors carefully avoided “active shooter” language.
(The definition of an active shooter is someone who is “actively engaged in killing or attempting to kill people in a populated area.” This is often conflated with the sound of shots fired or just a person carrying something that looks like a gun.)
As we listened to the same scanner the students were monitoring, we heard no evidence of multiple shooters (rarely true) or of any victims. We did hear the police, as a precaution, call in two medic airlifts. We also heard dispatchers informing officers of secondary 911 calls, a common occurrence, originating from frightened people who thought they heard gunshots or saw a suspect.
Thirty-nine minutes into the incident, a police commander declared on the scanner that they had found no victims and no shooter so far.
The transmission didn’t get much pickup from scanner listeners, social media aggregators or the media (it’s standard practice for news organizations not to report directly from the scanner). It also wasn’t relayed to students.
But it did provide directional insight that matched what Factal editors were seeing on social media: none of the student reports showed any evidence of a shooting, even secondhand. There wasn’t even a vague suspect description.
Based on our experience covering hundreds of active shootings and false alarms, first at Breaking News and now at Factal, it looked increasingly unlikely that a shooting had occurred. So we posted an editor’s note for our members:
We haven’t received any official confirmation or seen any evidence of a shooting on the campus of Central Washington University in Ellensburg, Wash., but police continue to sweep several buildings and evacuate students as a precaution.
Lacking official facts, we provided guidance to Factal members that this was shaping up to be a false alarm.
Still, outside the initial alert, the official police, sheriff and fire social media accounts were silent. Ellensburg is a small town in a small county, but similar scenarios play out daily around the world. It’s not just small-town America that lacks the resources, or the willingness, to update Twitter or Facebook; governments worldwide often aren’t accustomed to sharing what’s happening. They share their own version of events, or nothing at all.
Information vacuums are breeding grounds not only for misinformation, but for disinformation as well. In the void, people jump to conclusions and bad actors spin up false information, complicating efforts to know what’s really happening. When governments cut off the internet, the situation gets even worse.
An hour and 22 minutes after the campus alert, Central Washington University finally provided an update: “We have no reports of any injuries at this time,” the university tweeted. An hour and 53 minutes into the incident, it gave the “all clear.” Students finally felt some semblance of safety, and parents breathed a sigh of relief. The university president confirmed it was a false alarm.
While the campus said it responded by the book with student safety as its top priority, students were understandably traumatized by the incident and upset at the lack of updates – an information vacuum that draws in any detail, real or imagined.
(Cory Bergman is the co-founder of Factal and former head of Breaking News, a real-time verification company owned by NBC News. The photo above is a screenshot of video taken by @Jake_M_Green on Twitter.)