Security for LLM Agents

Safer Assistance

© Lead Image © buraratn, 123RF.com

© Lead Image © buraratn, 123RF.com

Article from Issue 305/2026
Author(s): , Author(s):

Agentic LLM systems are susceptible to attack. We'll show you some steps you can take to mitigate the risk.

Agentic large language models (LLMs) offer a radically new approach to developing software by coordinating an entire ecosystem of agents in an imprecise conversation. This completely new way of working poses significant security risks, particularly due to manipulated prompts. A blog post [1] by renowned security specialist Bruce Schneier illustrates the problems: "We simply don't know how to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment – and by this I mean that it may encounter untrusted training data or input – is vulnerable to prompt injection. It's an existential problem that, near as I can tell, most people developing these technologies are just pretending isn't there."

To keep abreast of these risks, we comb through research articles written with a deep understanding of modern LLM-based tools and regularly summarize our findings in a blog [2], with the goal of providing an easy-to-understand practical overview of security issues and mitigations related to agent-based LLMs. Many risks are associated with agentic LLMs, and the technology is changing rapidly. It is therefore important to understand the risks and learn how to mitigate them whenever possible.

Agentic LLMs – a Definition of Terms

Artificial intelligence (AI) is changing rapidly, making its terms difficult to pin down. The term AI in particular is overused to cover everything from machine learning through LLMs to artificial general intelligence. The term agentic AI refers to LLM-based applications that can act autonomously. In order to act autonomously, AI agents extend the basic LLM model to include internal logic, loops, tool calls, background processes, and sub-agents. An agentic LLM taps a large number of data sources and can trigger activities with side effects (Figure 1).

[...]

Use Express-Checkout link below to read the full article (PDF).

Buy this article as PDF

Download Article PDF now with Express Checkout
Price $2.95
(incl. VAT)

Buy Linux Magazine

Related content

  • Skydive

    If you don't speak fluent Ethernet, it sometimes helps to get a graphical view of what your network is doing. Skydive offers visual insights that could reveal complex error patterns.

  • Wazuh

    This versatile security app checks for vulnerabilities, watches logs, and acts as a single interface for other tools.

  • News

    In the news: Linux Mint 22.2 Beta Available for Testing, Debian 13.0 Officially Released, Upcoming Changes for MXLinux, A New Linux AI Assistant in Town, Linux Kernel 6.16 Released with Minor Fixes, EU Sovereign Tech Fund Gains Traction, FreeBSD Promises a Full Desktop Installer, Linux Hits an Important Milestone, Plasma Bigscreen Returns.

  • Enhanced Security

    Verifying the security of your SSH configuration and performing regular audits are critical practices in maintaining a secure Linux environment.

  • A New Linux AI Assistant in Town

    Newelle, a Linux AI assistant, works with different LLMs and includes document parsing and profiles.

comments powered by Disqus
Subscribe to our Linux Newsletters
Find Linux and Open Source Jobs
Subscribe to our ADMIN Newsletters

Support Our Work

Linux Magazine content is made possible with support from readers like you. Please consider contributing when you’ve found an article to be beneficial.

Learn More

News