
AI takes center stage at OSAC Week as security teams weigh pace of adoption

Over four days of OSAC Week sessions, the staying power of AI was never in question. But there was a vibrant debate about how quickly the security industry should adopt it and where the boundaries lie for risk intelligence analysts. (All sessions were held under the Chatham House Rule, so I’m not attributing speakers.)

On one side, the pressure to innovate is relentless. Chief security officers explained that AI adoption is being mandated from the top and lagging behind is not an option. Vendors are rapidly deploying new AI features to help teams contend with the avalanche of data, while security leaders from AI companies warned of the “enshittification” of the Internet and the “arms race” with bad actors. As if to underline the pace of change, Google rolled out Gemini 3 right in the middle of OSAC Week.

On the other side, analysts raised red flags about moving too quickly and over-relying on AI. Studies warn that “cognitive offloading” erodes critical thinking skills. The “human in the loop” risks becoming a rubber stamp, losing the ability to identify errors or hallucinations. AI-heavy writing lacks human storytelling and soul, and models can become a “force multiplier for confirmation bias.” And analysts who work too closely with AI risk losing their connection with co-workers.

The consensus wasn’t about choosing between human and machine, but redefining the relationship. While AI provides rigor and structure, humans excel at synthesis and connection. The challenge is finding a balance that preserves critical thinking and trust while keeping pace with a faster world.

As several speakers suggested, a good rule of thumb is preserving the “last mile” for human work. (For example, while I used AI to get organized and suggest ideas, I wrote this blog post myself.) Taking occasional AI breaks can help, as can developing benchmarks to ensure quality doesn’t slip. Above all, prioritize transparency and the human relationships that form the foundations of trust and “keep us tethered to what’s real.”

Wherever you fall on the continuum of human and AI work, I’m a believer in relentless experimentation and practice. Carve out some time to try different models, generate images, create videos and test ideas without the pressure to produce an end product. Get to know what AI can do – and can’t do – so you understand the opportunities and risks. While the economics of AI remain in flux, the technology is not slowing down anytime soon.

A longtime intelligence leader speaking at the I3 Summit at OSAC Week said it best: “I hate change as much as the next person, but I just hate irrelevance more.”


We’re planning to hold new AI training classes for security professionals in early 2026. If you’re interested, sign up here and we’ll let you know when they’re scheduled.

Cory Bergman is the co-founder and chief product officer of Factal, a risk intelligence platform that blends advanced AI with experienced journalists.

What is Factal?

Trusted by many of the world’s largest companies and nearly 300 humanitarian NGOs, Factal is a risk intelligence and collaboration platform that brings clarity to an increasingly noisy and uncertain world.

Powered by a hybrid of advanced AI and experienced journalists, Factal detects early signals, verifies critical details and assesses the potential impact at the speed of social media. From physical incidents and brand mentions to geopolitical developments, Factal offers the most trusted, real-time risk intelligence on the market.

Factal is also home to the largest security and safety collaboration network in the private sector. Members securely share information with other members in proximity to the same incident, both on Factal.com and the Factal app.

Learn more at Factal.com, and we’d love to hear from you.