WebAssembly, Agentic AI, data classification, AI gateways and small language models
More than ever, enterprises are grappling with a hybrid IT estate spread across public cloud, on-premises, and edge computing. This poses significant challenges in terms of standardizing security, delivery, and operations across disparate environments.
Against this ever-changing backdrop, what are the key trends to look out for in 2025? A team of F5 experts weighed in with their predictions.
2025 Technology #1: WebAssembly
WebAssembly (Wasm) offers a path to portability across the hybrid multicloud estate, delivering the ability to deploy and run applications anywhere a Wasm runtime can operate.
But Wasm is more than just a manifestation of the promise for cross-portability of code. It offers performance and security-related benefits while opening new possibilities for enriching the functionality of browser-based applications.
In 2025, WebAssembly in the browser is not expected to undergo drastic changes. The main developments are happening outside of the browser with the release of WASI (WebAssembly System Interface) Preview 3.
This update introduces async and streams, solving a major issue with streaming data in various contexts, such as proxies. WASI Preview 3 provides efficient methods for handling data movement in and out of Wasm modules and enables fine-tuned control over data handling.
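The streaming pattern described above can be sketched in plain Python (this is an illustration of the pattern, not WASI's actual API): data flows through a transform chunk by chunk, so a proxy never has to buffer the entire payload before acting on it.

```python
# Illustrative sketch of chunked streaming (the pattern WASI Preview 3's
# stream support enables inside Wasm modules) -- not the WASI API itself.

def stream_transform(chunks, transform):
    """Apply `transform` to each chunk as it arrives,
    without buffering the whole payload in memory."""
    for chunk in chunks:
        yield transform(chunk)

# A proxy-style pass: uppercase each chunk as it flows through.
incoming = (part.encode() for part in ["hello ", "wasm ", "streams"])
outgoing = stream_transform(incoming, lambda b: b.upper())
result = b"".join(outgoing)
```

Because `stream_transform` is a generator, each chunk is processed and forwarded as soon as it arrives, which is exactly the behavior proxies need when relaying large or unbounded payloads.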
Oscar Spencer, Principal Engineer, F5
2025 Technology #2: Agentic AI
Autonomous coding agents are poised to revolutionize software development by automating key tasks such as code generation, testing, and optimization. These agents will significantly streamline the development process, reducing manual effort and speeding up project timelines.
Meanwhile, the emergence of Large Multimodal Agents (LMAs) will extend AI capabilities beyond text-based search to more complex interactions.
As AI agents reshape the internet, we will see the development of agent-specific browsing infrastructure, designed to facilitate secure and efficient interactions with websites. This could disrupt industries like e-commerce by automating complex web tasks, leading to more personalized and interactive online experiences.
However, as these agents become more integrated into daily life, new security protocols and regulations will be essential to manage concerns related to AI authentication, data privacy, and potential misuse.
Laurent Quérel, F5 Distinguished Engineer
2025 Technology #3: Data classification
Roughly 80% of enterprise data is unstructured. Looking ahead, generative AI models will become the preferred method for detecting and classifying unstructured enterprise data, offering accuracy rates above 95%. These models will become more efficient over time, requiring less computational power and enabling faster inference times.
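As a point of contrast with the generative AI classifiers described above, today's rule-based detection can be sketched in a few lines (the patterns and labels below are illustrative placeholders, not any vendor's actual rule set):

```python
import re

# Minimal DLP-style baseline: hand-written rules for sensitive data.
# In the scenario above, a generative AI classifier would replace or
# augment these brittle patterns with higher-accuracy detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> dict:
    """Return the sensitive-data categories found in unstructured text."""
    return {label: rx.findall(text)
            for label, rx in PATTERNS.items() if rx.search(text)}

findings = classify("Contact jane@example.com; SSN 123-45-6789 on file.")
```

The limitation is visible immediately: regexes only catch formats they were written for, while a model-based classifier can recognize sensitive content by meaning rather than shape.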
Solutions like Data Security Posture Management (DSPM), Data Loss Prevention (DLP), and Data Access Governance will increasingly rely on sensitive data detection and classification as a foundation for delivering a range of security services.
As network and data delivery services converge, platform consolidation will drive vendors to enhance their offerings, aiming to capture market share by providing comprehensive, cost-effective, and easy-to-use platforms that meet evolving enterprise needs.
James Hendergart, Sr. Dir. Technology Research, F5
2025 Technology #4: AI gateways
AI gateways are emerging as the natural evolution of API gateways, specifically tailored to address the needs of AI applications. Similar to how Cloud Access Security Brokers (CASBs) specialize in securing enterprise SaaS apps, AI gateways will focus on unique challenges like hallucinations, bias, and jailbreaking, which often result in undesired data disclosures.
As AI applications gain more autonomy, gateways will also need to provide robust visibility, governance, and supply chain security, ensuring the integrity of the training datasets and third-party models, which are now potential attack vectors.
Additionally, as AI apps grow, issues like distributed denial-of-service (DDoS) attacks and cost management become critical, given the high operational expense of AI applications compared to traditional ones. Moreover, increased data sharing with AI apps for tasks like summarization and pattern analysis will require more sophisticated data leakage protection.
In the future, AI gateways will need to support both reverse and forward proxies, with forward proxies playing a critical role in the short term as AI consumption outpaces AI production. Middle proxies will also be essential in managing interactions between components within AI applications, such as between vector databases and large language models (LLMs).
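The inspection role described above can be sketched as a simple policy hook (all names and rules here are hypothetical, for illustration only): the gateway examines an outbound prompt before it reaches an external model, blocking requests that would leak restricted data.

```python
# Hypothetical AI-gateway policy hook (names and rules are illustrative).
# A real gateway would layer many such checks: prompt inspection,
# response filtering, rate limiting, and cost controls.

BLOCKED_TERMS = {"internal-project-x"}  # assumed org-specific deny list

def inspect_prompt(prompt: str):
    """Reject prompts that would leak restricted data to an external model."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, f"blocked: prompt references '{term}'"
    return True, "allowed"

ok, reason = inspect_prompt("Summarize the Internal-Project-X roadmap")
```

In a forward-proxy deployment, this check sits between internal users and third-party AI services, which is where the article expects most near-term enforcement to happen while AI consumption outpaces AI production.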
Most pressing is the ability not only to address traditional security concerns around data (exfiltration, leakage) but also the ethical issues of hallucinations and bias. It is no surprise that the latter rank as significant risks in nearly every survey on the subject.
Ken Arora, F5 Distinguished Engineer
2025 Technology #5: Small Language Models
Given the issues with hallucinations and bias, it would be unthinkable to ignore the growing use of retrieval-augmented generation (RAG) and Small Language Models (SLMs). RAG has rapidly become a foundational architecture pattern for generative AI.
Organizations not already integrating RAG into their AI strategies are missing significant improvements in data accuracy and relevancy, especially for tasks requiring real-time information retrieval and contextual responses. But as the use cases for generative AI broaden, organizations are discovering that RAG alone cannot solve some problems.
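The RAG pattern itself is simple enough to sketch in a few lines. This minimal version (documents and queries are invented for illustration) retrieves the most relevant documents by token overlap, where a production system would use vector embeddings, then prepends them to the prompt:

```python
# Minimal RAG sketch: retrieve relevant context, then augment the prompt.
# Token overlap stands in for the embedding similarity a real system uses.

DOCS = [
    "The 2024 refund policy allows returns within 30 days.",
    "Office hours are 9am to 5pm on weekdays.",
    "Refunds for digital goods require manager approval.",
]

def retrieve(query: str, docs, k: int = 2):
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the refund policy?")
```

The retrieved context grounds the model's answer in organizational data it was never trained on, which is why RAG improves accuracy for domain-specific questions; its limits appear when the needed knowledge cannot be expressed as retrievable passages at all.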
The growing limitations of LLMs, particularly their lack of precision when dealing with domain-specific or organization-specific knowledge, are accelerating the adoption of small language models. While LLMs are incredibly powerful in general knowledge applications, they often falter when tasked with delivering accurate, nuanced information in specialized fields.
This gap is where SLMs shine, as they are tailored to specific knowledge areas, enabling them to deliver more reliable and focused outputs. Additionally, SLMs require significantly fewer resources in terms of power and computing cycles, making them a more cost-effective solution for businesses that do not need the vast capabilities of an LLM for every use case.
Lori MacVittie, F5 Distinguished Engineer
Looking ahead: beyond transformers
Transformer models, while powerful, have limitations in scalability, memory usage, and performance, especially as the size of AI models increases.
As a result, a new paradigm is emerging: novel neural network architectures combined with aggressive optimization techniques that promise to democratize AI deployment across a wide range of applications and devices.
The AI community is already witnessing early signs of post-transformer innovations in neural network design. These new architectures aim to address the fundamental limitations of current transformer models while maintaining or improving their remarkable capabilities in understanding and generating content.
Among the most promising developments is the emergence of highly optimized models, particularly 1-bit large language models. These innovations offer dramatic reductions in memory requirements and computational overhead while maintaining model performance despite reduced precision.
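The core idea behind these extreme quantization schemes can be sketched directly. This follows the absmean recipe published for ternary ("1.58-bit") models, where each weight collapses from a 32-bit float to one of {-1, 0, +1} plus a single shared scale (the weight values here are invented for illustration):

```python
# Sketch of ternary (1.58-bit) weight quantization: each weight maps to
# {-1, 0, +1} plus one shared per-tensor scale, replacing 32-bit floats.

def quantize_ternary(weights):
    """Absmean quantization: scale by mean |w|, then round and clamp."""
    scale = sum(abs(w) for w in weights) / len(weights)
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate weights from ternary values and the scale."""
    return [q * scale for q in quantized]

w = [0.9, -1.1, 0.04, -0.5]
q, s = quantize_ternary(w)
approx = dequantize(q, s)
```

The memory arithmetic explains the excitement: ternary values need under two bits each versus 32 for a float, roughly a 16x reduction before any further packing, and matrix multiplication against {-1, 0, +1} weights reduces to additions and subtractions, which is what makes CPU inference plausible.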
The impact of these developments will cascade through the AI ecosystem. Models that once demanded substantial computational resources and memory will operate efficiently with significantly lower overhead. This optimization will trigger a shift in computing architecture, with GPUs potentially becoming specialized for training and fine-tuning tasks while CPUs handle inference workloads with newfound capability.
Kunal Anand, Chief Innovation Officer