Data Privacy
Data Privacy and Cyber‑Security Threats in Generative AI
Alex Silonosov, Lawrence Henesey, Blekinge Institute of Technology
Introduction
The rapid growth of Generative AI (GenAI) creates new risks related to data confidentiality, regulatory exposure (GDPR, EU AI Act), system vulnerabilities, and unintentional leakage. SMEs are increasingly using GenAI without cybersecurity guidance, leading to Shadow AI behavior and increased risk.
Generative AI Tools for SME Use‑Cases
| Use Case | GenAI Tool | Service Model | Data Input | Output |
|---|---|---|---|---|
| Marketing Images | Microsoft Copilot | SaaS | Text, images, docs | Text, images, documents |
| MidJourney | SaaS | Text, image | Images, video | |
| ChatGPT | SaaS / Self‑hosted | Text, images, documents | Text, documents, images | |
| Create Website | Wix.com | SaaS | Text prompt | Website structure |
| Tax Management | taxhacker.app | SaaS | Receipt images | VAT aggregation |
Data Privacy & Cybersecurity Risks
| Risk | Description | Examples |
|---|---|---|
| Data Confidentiality | GenAI tools may store uploaded files and use them for training. | Drawings, internal docs |
| Legal Exposure | Uploading personal or contractual data can violate GDPR or NDAs. | Invoices, HR records |
| Content Reliability | GenAI outputs may hallucinate or provide incorrect results. | Misinterpreted images, wrong market data |
| Data Leakage | AI assistants may access calendars, emails, or stored conversations. | MS Teams transcript bots |
| Authentication Leakage | Use of personal Gmail/GitHub accounts creates cross‑device exposure. | Documents accessible from home devices |
Practical Tips (Internet‑Based GenAI)
Threat Taxonomy for Integrated / Self‑Hosted GenAI
| Threat Model | Attack | Impact |
|---|---|---|
| Supply Chain Attack | Compromised containers, API flaws | On‑premise compromise |
| AI Scam Websites | Fake AI tools containing malware | Data theft, remote access |
| Data Poisoning | Malicious training data injection | Backdoors, unreliable outputs |
| Prompt Injection | Bypassing guardrails, leaking system prompts | Unauthorized access & actions |
Conclusion
GenAI tools—cloud‑based or self‑hosted—must be governed with clearly assigned risk ownership, defined cyber‑security controls, and documented usage policies. SMEs that adopt structured AI governance gain productivity advantages while minimizing cyber, legal, and reputational risks.

