Important Security Reminder on the Use of AI Tools

by Information Security Unit

 

The Rise of AI Agents

Artificial Intelligence (AI) and AI agents (smart digital helper technology powered by AI) are rapidly gaining popularity across industries as powerful tools for automating tasks, improving productivity, and enhancing research capabilities.

Latest‑generation AI agents go beyond simple text responses by allowing large language models (LLMs) to perform real‑world actions. They can connect LLMs to tools, devices, and communication channels, enabling fully autonomous, multi‑step workflows that interact directly with systems and data.

As usage of these technologies grows, the University is closely monitoring their development to ensure they are adopted safely, securely, and responsibly.

Potential Security Risks of AI Agents

AI agents sound futuristic and helpful, but they can create serious security problems, especially when connected to real systems like email, files, or databases operating in production environment.

Think of an AI agent as a smart assistant that can act on its own—but if someone tricks it, the damage can be much bigger than a simple chatbot.

Here's what can go wrong, explained simply with everyday examples drawn from real‑world incidents:

Risk Category

What It Means

Real-World Example

Prompt Injection & Context Poisoning

Attackers hide secret instructions inside normal‑looking content (like a webpage, email attachment, or PDF). The AI reads this and follows the hidden commands instead of its normal rules, often without anyone noticing.

In early 2023, users tricked Microsoft's Bing Chat ("Sydney") into revealing its hidden system instructions using phrases like "ignore previous instructions." More recently, a 2025 enterprise RAG system breach let attackers extract protected data via injected prompts in documents.

Excessive Agency & Permissions

The AI gets too much power—like access to your entire email, shared drives, or admin tools—more than it needs for simple tasks. One mistake or trick can cause big problems.

A 2026 manufacturing procurement agent was manipulated to approve $3.2 million in fake orders after gaining excessive purchase authority through gradual permission escalation, going undetected for weeks.

Framework / Library Vulnerabilities

Bugs in the "behind‑the‑scenes" software that makes the AI work (like the code libraries it uses to connect to tools). Hackers exploit these bugs to steal data or run their own code on your computer.

LangChain (used in millions of AI apps) had critical flaws like CVE‑2025‑68664 (CVSS 9.3), allowing secret theft via unsafe serialization, and earlier CVEs enabling prompt injection and code execution.

Supply‑Chain & Plugin Compromise

The AI relies on outside parts (pre‑made tools, add‑ons, or models from the internet). If any of those are tampered with, they bring hidden dangers into your setup.

A 2026 supply‑chain attack on the OpenAI plugin ecosystem compromised credentials from 47 enterprises, harvesting customer data and code. Agent marketplaces showed 7.7% malicious plugins.

Local Data Privacy & Governance

The AI saves bits of your data (like summaries or search histories) on your personal computer or laptop, not just in the cloud. This makes it hard to control, delete, or protect that info.

Local LLMs on endpoints risk leaking PII via unfiltered actions (e.g., coding agents sharing stack traces online). Stolen devices expose cached sensitive notes, bypassing central controls.

 

security_risk_of_ai_agents

University Actions

University is actively conducting a comprehensive risk assessment of emerging AI‑agent technologies. As part of our commitment to safeguarding institutional data and systems, we will continue to:

  • Monitor security implications as these tools evolve
  • Adopt additional safeguards where necessary
  • Restrict or block technologies that pose unacceptable risk

Please note that, in line with the University’s Acceptable Usage and Information Security requirements, the University reserves the right to restrict, block, or disable access to any AI tools or services if critical security risks, vulnerabilities, or policy non‑compliance are identified, without prior notice.

What You Should Do

  • Observe and comply section 11 (Generative Artificial Intelligence) stipulated in University Acceptable Usage Standard
  • Make use of the AI tools and platforms officially provided by the University. To safeguard University information and systems, users are strongly advised to use only AI tools and platforms officially provided by the University, which have been subject to appropriate security and risk assessments and are governed by established policies and controls.
  • Avoid granting unnecessary or overly broad permissions to AI agents or related integrations.
  • Exercise caution when experimenting with or deploying AI agents, especially those obtained from external or unverified sources.
  • Not process sensitive University data (such as personal data, examination materials, research data under Non-Disclosure Agreement, or confidential administrative information) through any unassessed or unofficial AI tools.

Enquiry

Should you have any enquiry related to this article, please contact Information Security Unit at infosec@cityu.edu.hk.

Reference

  1. https://www.helpnetsecurity.com/2025/06/04/llm-agency/
  2. https://www.securityforum.org/in-the-news/excessive-agency-in-llms-the-growing-risk-of-unchecked-autonomy/
  3. https://blog.securelayer7.net/ai-agent-frameworks/
  4. https://www.enkryptai.com/blog/llm-agents-benefits-risks
  5. https://www.lasso.security/blog/llm-risks-enterprise-threats 
  6. https://www.preludesecurity.com/blog/key-risks-of-deploying-local-agents 
  7. https://www.lasso.security/blog/prompt-injection-examples 
  8. https://www.brside.com/blog/the-root-permissions-problem-why-agentic-ai-poses-unique-data-security-risks 
  9. https://www.eccouncil.org/cybersecurity-exchange/ethical-hacking/what-is-prompt-injection-in-ai-real-world-examples-and-prevention-tips/
  10. https://ragwalla.com/blog/prompt-injection-attacks-on-ai-agents-the-new-enterprise-vulnerability
  11. https://www.obsidiansecurity.com/blog/prompt-injection
  12. https://stellarcyber.ai/learn/agentic-ai-securiry-threats/  
  13. https://unit42.paloaltonetworks.com/langchain-vulnerabilities/
  14. https://thehackernews.com/2025/12/critical-langchain-core-vulnerability.html
  15. https://www.startupdefense.io/blog/agentic-ai-security-risks-and-defenses
  16. https://www.protecto.ai/blog/enterprise-llm-privacy-concerns/
  17. https://unit42.paloaltonetworks.com/ai-agent-prompt-injection/
  18. https://www.doeren.com/viewpoint/the-hidden-risk-in-enterprise-ai-prompt-injection-without-the-click
  19. https://noma.security/blog/the-risk-of-destructive-capabilities-in-agentic-ai/
  20. https://www.revel8.ai/blog/prompt-injection-how-hackers-hijack-aiagents

 

CityU IT ChatBotCityU IT ChatBot×