2026 Security Predictions

Cybersecurity in 2026 won’t be limited by technology. It will be defined by speed, identity and trust. To understand what that really means in practice, we asked three of Ekco’s top security experts one simple question: What will fundamentally change by 2026?

Each works in a different area of security, and each approached the question from a different angle. Together, their answers paint a clear picture of the challenges organisations need to prepare for now.

Prediction 1: AI will make cyberattacks less predictable

Dominic Kearne – CTI Analyst, Ekco

By 2026, the use of AI by threat actors will increase exponentially. This will enable increasingly sophisticated and bespoke cyberattacks, including adaptive malware that changes behaviour as it operates.

As a result, techniques used by threat actors will evolve. Attack pathways will become less predictable, and traditional Indicators of Compromise may become far less reliable. Crucially, AI will also lower the barrier to entry – less technical threat actors will be able to carry out attacks that previously required advanced skills.
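
To make the point about Indicators of Compromise concrete, here is a toy sketch in Python (all names and values are hypothetical, not drawn from any real threat feed or product) of why a static hash blocklist goes quiet against a payload that mutates on every run, while a rule keyed to behaviour can still fire:

```python
import hashlib

# Hypothetical blocklist entry: yesterday's static IoC.
KNOWN_BAD_HASHES = {"<sha256-of-yesterdays-sample>"}

def ioc_match(payload: bytes) -> bool:
    """Classic IoC matching: flags only exact, previously seen hashes."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

def behaviour_match(actions: list[str]) -> bool:
    """Behavioural rule: flags 'read credentials, then contact an unknown
    host', regardless of what the binary happens to hash to."""
    return "read_credential_store" in actions and "connect_unknown_host" in actions

mutated_payload = b"same logic, different bytes on every run"
print(ioc_match(mutated_payload))                   # False: the static IoC is silent
print(behaviour_match(["read_credential_store",
                       "connect_unknown_host"]))    # True: the behaviour still fires
```

The takeaway is not that hash-based IoCs become useless, but that detection logic anchored to what an attack does degrades far more slowly than logic anchored to what an attack is.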

That said, threat actors won’t have AI all their own way. Defenders also have access to AI-driven capabilities. The challenge will be how effectively organisations use them.

Why this matters for boards and IT leaders

More attacks, from a broader range of actors, using less predictable techniques, will increase the likelihood of successful incidents. When attacks succeed, the consequences are well understood: reputational damage, financial loss, operational shutdowns and, in some cases, long-term or irreparable harm to the organisation.

What organisations should start doing now

Organisations should assume that an attack will happen and work backwards from that assumption. The focus should be on how to maintain the integrity of systems and data when, not if, defences are bypassed.

Prediction 2: AI agents will become high-risk identities

Keith Batterham – Practice Lead, Ekco

By 2026, we will see the rise of what can be described as a “synthetic workforce”. AI agents will transition from passive tools into autonomous entities that possess their own identities.

This will lead to a massive increase in non-human identities (NHIs). Unlike traditional service accounts, these identities will be ephemeral, decision-making agents that can be created and removed rapidly. In many organisations, they may significantly outnumber human employees, yet today they operate with a fraction of the oversight.

In effect, these are digital employees that can be spawned at will.

Why this matters for boards and IT leaders

This introduces the risk of high-velocity insider threats. If an autonomous AI agent is given excessive privileges (or is compromised), it can exfiltrate data or disrupt infrastructure at machine speed, far faster than a human security team can respond.

From a governance and compliance perspective, this creates serious challenges. If an AI agent causes a regulatory breach, unclear identity attribution makes accountability, liability and compliance with frameworks such as GDPR or DORA extremely difficult to enforce.

What organisations should start doing now

Organisations need to treat AI agents as employees, not software. Every non-human identity should be verifiable, have a clearly defined owner, operate under strictly enforced least-privilege access and carry a mandatory “kill switch”, just as you would expect when offboarding a human employee.
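
As a concrete sketch of that model, the Python fragment below (all names are hypothetical; this is not a reference to any specific IAM product) treats a non-human identity the way you would treat an employee record: an accountable owner, narrowly scoped least-privilege access, a built-in expiry and a kill switch:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """A non-human identity (NHI) modelled like an employee record."""
    agent_id: str
    owner: str                 # the accountable human owner
    scopes: frozenset[str]     # least-privilege permissions, explicitly listed
    expires_at: datetime       # ephemeral by default
    revoked: bool = False      # the mandatory "kill switch"

    def is_active(self) -> bool:
        return not self.revoked and datetime.now(timezone.utc) < self.expires_at

    def authorise(self, scope: str) -> bool:
        """Deny anything outside the agent's explicit scopes."""
        return self.is_active() and scope in self.scopes

    def kill(self) -> None:
        """Immediate offboarding: revoke all access in one state change."""
        self.revoked = True

# A short-lived reporting agent with a single, narrow permission.
agent = AgentIdentity(
    agent_id="agent-reporting-0042",
    owner="jane.doe@example.com",
    scopes=frozenset({"reports:read"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
print(agent.authorise("reports:read"))      # True
print(agent.authorise("customers:export"))  # False: least privilege in action
agent.kill()
print(agent.authorise("reports:read"))      # False: the kill switch takes effect
```

The design point worth noting is that expiry and revocation are properties of the identity itself, so offboarding an agent is a single state change rather than a hunt through scattered credentials.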

Prediction 3: Impersonation will scale beyond human detection

Dr. Jonny Milliken – Senior Security Manager, Ekco

As AI capability continues to mature, threat actors will become faster and more effective at targeting organisations and their people.

This is most likely to be seen through increased video and audio impersonation attacks, as well as hyper-personalised phishing campaigns. Over time, the distinction between traditional phishing and spear phishing is likely to disappear altogether.

In response, AI-enabled defensive capabilities will become a standard part of the security toolkit. However, maintaining a human in the loop will remain important, both practically and from a regulatory perspective.

Why this matters for boards and IT leaders

Executive impersonation and fraudulent transactions will represent a rapidly increasing risk. Organisations will no longer be able to rely solely on user reporting or reactive controls to detect and stop campaigns.

Proactive phishing detection, threat hunting and insider risk programmes will become essential capabilities, distinct from traditional SOC operations.

At the same time, due diligence around the adoption or development of AI solutions will be critical. Boards and leadership teams must fully understand their obligations under the EU AI Act before introducing new AI capabilities into the business.

More broadly, the level of AI literacy across the workforce will be a stronger predictor of success than simply deploying AI tools.

What organisations should start doing now

Organisations should invest in AI literacy across the business, establish clear internal AI governance and strengthen identity verification processes, particularly in finance and helpdesk functions where impersonation attacks are most effective.
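
As one illustration of what strengthening identity verification could mean in a finance workflow, the sketch below (Python, with entirely hypothetical names and data) refuses to act on a payment-detail change until it has been confirmed out of band, via a contact registered before the request arrived, so a convincing voice or video alone can never authorise the change:

```python
from dataclasses import dataclass

# Pre-registered callback contacts, maintained independently of any inbound
# request (hypothetical data; in practice this lives in a vendor-management system).
KNOWN_CONTACTS = {"acme-supplies": "+44 20 7946 0000"}

@dataclass
class PaymentChangeRequest:
    requester: str
    supplier_id: str
    new_account: str
    verified_out_of_band: bool = False  # set only after a callback to the number on file

def approve(request: PaymentChangeRequest) -> bool:
    """Approval requires confirmation via a contact already on file,
    never a contact supplied with the request itself."""
    return request.supplier_id in KNOWN_CONTACTS and request.verified_out_of_band

req = PaymentChangeRequest("cfo@victim.example", "acme-supplies", "GB00-HYPOTHETICAL")
print(approve(req))              # False: a convincing voice or video is not enough
req.verified_out_of_band = True  # recorded once the callback succeeds
print(approve(req))              # True
```

The control is deliberately simple: it does not try to detect a deepfake, it makes the impersonation channel insufficient on its own.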

One shared message

Most organisations aren’t unprotected. They’re unprepared for how quickly things now unfold.

Waiting is the biggest risk.

Organisations that prepare now, by addressing speed, identity and trust, will be far better positioned to manage the realities of cybersecurity in 2026 and beyond.

Ekco works with boards and IT leaders to test readiness before it’s tested for them.

Get in touch
