Data Privacy

GenAI Data Privacy & Cyber‑Security Threats – AIKnowIT

Data Privacy and Cyber‑Security Threats in Generative AI

Alex Silonosov, Lawrence Henesey, Blekinge Institute of Technology

Introduction

The rapid growth of Generative AI (GenAI) creates new risks related to data confidentiality, regulatory exposure (GDPR, EU AI Act), system vulnerabilities, and unintentional leakage. SMEs are increasingly using GenAI without cybersecurity guidance, leading to Shadow AI behavior and increased risk.

Generative AI Tools for SME Use‑Cases

Use CaseGenAI ToolService ModelData InputOutput
Marketing ImagesMicrosoft CopilotSaaS Text, images, docsText, images, documents
MidJourneySaaS Text, imageImages, video
ChatGPTSaaS / Self‑hosted Text, images, documentsText, documents, images
Create WebsiteWix.comSaaS Text promptWebsite structure
Tax Managementtaxhacker.appSaaS Receipt imagesVAT aggregation

Data Privacy & Cybersecurity Risks

RiskDescriptionExamples
Data Confidentiality GenAI tools may store uploaded files and use them for training. Drawings, internal docs
Legal Exposure Uploading personal or contractual data can violate GDPR or NDAs. Invoices, HR records
Content Reliability GenAI outputs may hallucinate or provide incorrect results. Misinterpreted images, wrong market data
Data Leakage AI assistants may access calendars, emails, or stored conversations. MS Teams transcript bots
Authentication Leakage Use of personal Gmail/GitHub accounts creates cross‑device exposure. Documents accessible from home devices

Practical Tips (Internet‑Based GenAI)

Tip 1: Define “no‑upload data”: HR, medical, financial, personal, or confidential material.
Tip 2: Check EU AI Act categories (high‑risk, banned systems).
Tip 3: All AI‑generated content must undergo human review.
Tip 4: Keep an internal GenAI usage log for compliance and auditing.
Tip 5: Mitigate data leaks: use self‑hosted solutions and enforce corporate accounts + MFA.

Threat Taxonomy for Integrated / Self‑Hosted GenAI

Threat ModelAttackImpact
Supply Chain Attack Compromised containers, API flaws On‑premise compromise
AI Scam Websites Fake AI tools containing malware Data theft, remote access
Data Poisoning Malicious training data injection Backdoors, unreliable outputs
Prompt Injection Bypassing guardrails, leaking system prompts Unauthorized access & actions

Conclusion

GenAI tools—cloud‑based or self‑hosted—must be governed with clearly assigned risk ownership, defined cyber‑security controls, and documented usage policies. SMEs that adopt structured AI governance gain productivity advantages while minimizing cyber, legal, and reputational risks.

© 2026 AIKnowIT · All Rights Reserved