
LLM Processing Privacy Policy

Data Flow, Network Boundaries & Privacy Protection

Status: Complete
Version: 1.1
Last Updated: 2025-01-12
Applies To: All Knowcode Ltd LLM and AI tool usage


Executive Summary

Complete Transparency in AI Data Processing

This policy details where data resides, which information crosses network boundaries, and how privacy is protected when Large Language Models (LLMs) and AI tools are used in Knowcode Ltd operations.

Critical Insight: Understanding data flows is essential for maintaining security, compliance, and client trust in AI-powered development workflows.


LLM Data Flow Architecture

Complete Data Journey Mapping

graph TD
    subgraph "Local Environment"
        A[Developer Workstation]
        B[Local Files & Code]
        C[Environment Variables]
        D[Configuration Files]
        E[Git Repository]
    end
    subgraph "Network Boundary"
        F[Corporate Firewall]
        G[VPN Gateway]
        H[TLS Encryption Layer]
    end
    subgraph "External LLM Services"
        I[Anthropic Claude Code]
        J[OpenAI API Services]
        K[Other AI Providers]
    end
    subgraph "Data Processing"
        L[AI Model Processing]
        M[Response Generation]
        N[Temporary Storage]
        O[Analytics & Logging]
    end
    A --> F
    B --> I
    C -.->|"Protected by Default"| A
    D --> I
    E --> I
    F --> H
    G --> H
    H --> I
    H --> J
    H --> K
    I --> L
    J --> L
    K --> L
    L --> M
    M --> N
    M --> O
    N -.->|"Temporary Only"| L
    style A fill:#f0fdf4,color:#000
    style C fill:#fefce8,color:#000
    style H fill:#fefce8,color:#000
    style L fill:#fdf2f8,color:#000
    style N fill:#fefce8,color:#000

Data Location Matrix

STAYS LOCAL

Always Protected

  • SSH keys and certificates
  • Database credentials
  • API keys and secrets
  • Personal identification data
  • Proprietary algorithms (unless explicitly shared)

Local Processing

  • File system metadata
  • Local configuration preferences
  • Cached responses
  • Development environment settings
  • Git history and branches

CROSSES NETWORK

Transmitted to LLM Services

  • Source code files (when read)
  • Natural language prompts
  • Error messages and logs
  • File structure information
  • Development context data

Potentially Sensitive

  • Business logic and algorithms
  • Custom implementations
  • Client-specific code
  • Internal processes
  • Architecture patterns

REMOTE PROCESSING

AI Provider Infrastructure

  • LLM model processing
  • Response generation
  • Temporary data storage
  • Usage analytics
  • Error reporting

Retention Periods

  • 30-day standard retention
  • Zero retention options available
  • Processing memory only
  • No permanent model training
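
The matrix above drives a simple operational rule: check for "Always Protected" items before any file leaves the local environment. The sketch below is a minimal pre-flight scan over staged git changes; the grep patterns and the use of the staged file list are illustrative assumptions and should be adapted to each project.

# Minimal pre-flight scan: flag likely secrets before files are shared with an AI tool.
# The patterns and the staged-file selection are illustrative; adapt them per project.
SECRET_PATTERNS='-----BEGIN [A-Z ]*PRIVATE KEY-----|AKIA[0-9A-Z]{16}|api[_-]?key[[:space:]]*[=:]|password[[:space:]]*[=:]'

if git diff --cached --name-only | xargs -r grep -inE -e "$SECRET_PATTERNS"; then
    echo "Potential secrets found above; review before sending anything to an LLM service." >&2
    exit 1
fi
echo "No obvious secrets detected (this does not replace a proper review)."

A failing scan should block the AI workflow until the flagged material is removed or the affected files are excluded.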

Network Boundary Analysis

What Crosses the Corporate Network

Detailed Network Flow Analysis

Outbound Data Flows

sequenceDiagram
    participant Dev as Developer
    participant Local as Local Machine
    participant Corp as Corporate Network
    participant Internet as Internet
    participant LLM as LLM Provider
    Dev->>Local: Execute AI command
    Local->>Local: Read local files
    Local->>Corp: Encrypt data packet
    Corp->>Corp: Firewall & security checks
    Corp->>Internet: Forward through VPN
    Internet->>LLM: TLS encrypted transmission
    Note over Corp: Corporate security controls apply
    Note over Internet: Data encrypted in transit
    Note over LLM: Processed on remote servers
    LLM-->>Internet: Encrypted response
    Internet-->>Corp: Return through VPN
    Corp-->>Local: Security validated response
    Local-->>Dev: Display results

Network Security Layers

Layer | Protection | Data State | Controls
Local Machine | OS-level permissions | Plaintext files | File system access controls
Corporate Firewall | Network filtering | Encrypted packets | Traffic monitoring & filtering
VPN Gateway | Network tunneling | Encrypted tunnel | Corporate network policies
Internet Transit | TLS encryption | Encrypted in transit | Certificate validation
LLM Provider | Provider security | Processed remotely | Provider privacy policies

Critical Network Boundary Considerations

Data Exposure Risks

Internet Transmission

  • All prompts and code sent over internet
  • Potential for network interception
  • Dependency on external service availability
  • Possible service provider data breaches

Corporate Visibility

  • Network logs may capture metadata
  • Firewall logs show connection patterns
  • VPN logs record data transfer volumes
  • Security teams can monitor AI tool usage

Service Provider Logging

  • Usage patterns tracked by LLM providers
  • Error logs may contain sensitive context
  • Analytics data collection
  • Potential compliance audit trails

Protection Measures

Encryption Standards

  • TLS 1.3 encryption for all transmissions (spot-check sketch below)
  • Certificate pinning where supported
  • Encryption maintained end-to-end across the transmission path
  • Regular security protocol updates
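
The TLS requirement above can be spot-checked from any workstation. A minimal check, assuming curl and openssl are installed (api.anthropic.com is used purely as an example endpoint):

# Confirm that TLS 1.3 can be negotiated with an AI provider endpoint (example host).
curl -sI --tlsv1.3 https://api.anthropic.com >/dev/null && echo "TLS 1.3 negotiated"

# Inspect the negotiated protocol and certificate verification result in more detail.
openssl s_client -connect api.anthropic.com:443 -tls1_3 </dev/null 2>/dev/null \
    | grep -E 'Protocol|Verify return code'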

Access Controls

  • Multi-factor authentication requirements
  • API key rotation and management
  • Network access restrictions
  • User permission management

Monitoring & Auditing

  • Comprehensive usage logging
  • Regular security assessments
  • Compliance monitoring
  • Incident response procedures

Service Provider Data Handling

Anthropic (Claude Code)

Anthropic Data Processing Details

Data Handling Practices

  • Default Policy: No training on user code or conversations
  • Retention: 30-day automatic deletion from backend systems
  • Local Storage: Up to 30 days on user devices for session resumption
  • Zero Retention: Available for enterprise customers with special API keys

Security Measures

  • Encryption: TLS encryption for all data in transit
  • Access Controls: Strict server-side access limitations
  • Processing: Temporary processing memory only, no persistent storage
  • Compliance: SOC 2 Type II certification and regular security audits

Data Locations

  • Primary Processing: United States (specific regions may vary)
  • Backup Systems: Geographic redundancy for service reliability
  • Legal Jurisdiction: Governed by Anthropic's terms of service
  • Data Residency: Confirm with Anthropic for specific regulatory requirements

Other LLM Providers

OpenAI Services

Data Handling (as of 2024)

  • Training Policy: Data not used for model training by default when accessed via the API
  • Retention: 30-day retention period for abuse monitoring
  • Zero Retention: Available for enterprise customers
  • Location: Primarily US-based processing

Security Features

  • TLS encryption in transit
  • SOC 2 Type II compliance
  • Regular security assessments
  • Enterprise-grade security controls

Cloud Provider LLMs

Enterprise Options

  • AWS Bedrock: Data stays within your AWS account
  • Google Vertex AI: Processed within Google Cloud infrastructure
  • Azure OpenAI: Data remains in your Azure tenant

Enhanced Control

  • Customer-managed encryption keys
  • VPC/private network connectivity (see the sketch after this list)
  • Detailed audit logging
  • Regional data residency options
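
As one illustration of the private-connectivity option, the sketch below creates an AWS interface VPC endpoint so Bedrock traffic stays on the private network rather than the public internet. The VPC, subnet, and security group IDs are placeholders, the region is an example, and the service name should be confirmed against current AWS documentation.

# Keep Bedrock traffic on the private network via an interface VPC endpoint.
# IDs are placeholders; confirm the service name and region in current AWS docs.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.eu-west-2.bedrock-runtime \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled

Comparable options exist for Azure OpenAI (Private Link) and Google Vertex AI (Private Service Connect).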

User Rights & Privacy Controls

Individual Privacy Rights

Your Data Rights

Access & Control Rights

  • Right to Know: What data is collected and how it's used
  • Right to Access: Review data that providers have about you
  • Right to Delete: Request deletion of your data from provider systems
  • Right to Correct: Update or correct inaccurate information
  • Right to Export: Download your data in portable formats

Opt-Out Mechanisms

  • Telemetry Opt-Out: Disable usage analytics and error reporting
  • Training Opt-Out: Ensure data is not used for model training
  • Data Retention Opt-Out: Use zero retention services where available
  • Service Opt-Out: Discontinue use of AI services entirely

Implementation Controls

Technical Controls

Environment Configuration

# Disable telemetry across all tools
export DISABLE_TELEMETRY=true
export DISABLE_ERROR_REPORTING=true
export DISABLE_BUG_COMMAND=true

# Use zero retention API keys
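# (values below are placeholders; actual keys are provisioned under a zero-retention agreement with the provider)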
export ANTHROPIC_API_KEY="zero-retention-key"
export OPENAI_API_KEY="enterprise-zero-retention-key"

File Access Controls

# Restrict Claude Code to specific directories
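# (flags shown in this example are illustrative; confirm supported options with your installed CLI)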
cd /safe/project/directory
claude-code --restrict-access

# Review files before AI processing
claude-code --confirm-file-access

Policy Controls

Organizational Policies

  • Clear AI tool usage guidelines
  • Data classification and handling procedures
  • Approval workflows for sensitive projects
  • Regular privacy impact assessments

User Training

  • Privacy awareness training
  • Data handling best practices
  • Incident reporting procedures
  • Regular policy updates and reviews

Compliance & Regulatory Considerations

Global Privacy Regulations

🇪🇺 GDPR Compliance (European Union)

graph TD
    A[EU Personal Data] --> B{GDPR Assessment}
    B -->|Personal Data Identified| C[GDPR Protections Required]
    B -->|No Personal Data| D[Standard Processing]
    C --> E[Lawful Basis Required]
    C --> F[Data Subject Rights]
    C --> G[Data Protection Impact Assessment]
    E --> H[Legitimate Interest/Consent]
    F --> I[Access, Rectification, Erasure]
    G --> J[Risk Mitigation Measures]
    style A fill:#f0f9ff,color:#000
    style C fill:#fefce8,color:#000
    style D fill:#f0fdf4,color:#000

GDPR Considerations for LLM Usage

  • Personal Data Definition: Any information relating to identified or identifiable individuals
  • Processing Basis: Requires legitimate interest or explicit consent for AI processing
  • Data Subject Rights: Must provide access, correction, and deletion mechanisms
  • Cross-Border Transfers: Special requirements for data leaving EU/EEA
  • Impact Assessments: Required for high-risk AI processing activities

Industry-Specific Regulations

Healthcare (HIPAA)

High Risk

  • PHI cannot be sent to most LLM providers
  • Requires HIPAA-compliant AI services
  • Business Associate Agreements needed
  • Strict access controls and audit trails

Mitigation

  • Use HIPAA-compliant AI services
  • Data anonymization before processing
  • Secure, private cloud deployments
  • Regular compliance audits

Financial (SOX/PCI)

Moderate Risk

  • Financial data requires special handling
  • Audit trail requirements
  • Access control documentation
  • Change management procedures

Controls

  • Separate environments for financial systems
  • Enhanced logging and monitoring
  • Regular security assessments
  • Compliance documentation

Government/Defense

Highest Risk

  • Classified data cannot use external LLMs
  • Strict data residency requirements
  • Security clearance implications
  • FedRAMP compliance needed

Requirements

  • On-premises or government cloud only
  • Security clearance for all personnel
  • Continuous monitoring
  • Regular security assessments

Security Best Practices

Implementation Guidelines

Comprehensive Security Framework

Organizational Level

  1. Data Classification: Identify and classify all data types before AI processing
  2. Risk Assessment: Regular evaluation of AI tool risks and benefits
  3. Policy Development: Clear guidelines for AI tool usage and data handling
  4. Training Programs: Regular education on privacy and security best practices
  5. Incident Response: Prepared procedures for privacy and security incidents

Individual Level

  1. Data Awareness: Understand what data you're sharing with AI tools
  2. Tool Configuration: Properly configure privacy and security settings
  3. Access Controls: Use strong authentication and limit tool access
  4. Regular Reviews: Periodically review AI tool usage and data exposure
  5. Incident Reporting: Immediately report suspected privacy or security issues

Technical Implementation

Recommended Practices

Network Security

  • Use corporate VPN for all AI tool access
  • Implement network monitoring and logging
  • Regular security updates and patches
  • Certificate pinning where possible

Data Management

  • Regular data classification reviews
  • Automated sensitive data detection (see the example after this list)
  • Secure deletion procedures
  • Backup and recovery planning
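
For the automated detection item above (see also the pre-flight scan earlier in this policy), a common approach is a dedicated secret scanner run before any AI-assisted work. The sketch below assumes the open-source gitleaks tool (v8) is installed; any equivalent scanner serves the same purpose.

# Scan the repository, including history, for committed secrets.
gitleaks detect --source . --report-path gitleaks-report.json

# Scan only staged, uncommitted changes before sharing them with an AI tool.
gitleaks protect --staged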

Access Control

  • Multi-factor authentication
  • Role-based access controls
  • Regular access reviews
  • API key rotation

Common Pitfalls

Security Risks

  • Using AI tools on unsecured networks
  • Sharing API keys between team members
  • Processing sensitive data without classification
  • Ignoring privacy settings and defaults

Compliance Issues

  • Lack of data processing documentation
  • Insufficient impact assessments
  • Missing consent mechanisms
  • Inadequate data subject rights implementation

Technical Problems

  • Outdated security configurations
  • Unmonitored AI tool usage
  • Inadequate logging and auditing
  • Poor incident response procedures

Monitoring & Audit Framework

Continuous Monitoring

Monitoring Strategy

Usage Monitoring

  • AI tool access patterns and frequency
  • Data volume and type analysis
  • User behavior and compliance monitoring
  • Service provider usage tracking

Security Monitoring

  • Network traffic analysis for AI services (example after this list)
  • Anomaly detection in usage patterns
  • Security incident detection and response
  • Regular vulnerability assessments
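
A simple starting point for the network-traffic item above is to filter egress or proxy logs for known AI provider hostnames. The log path, field positions, and hostname list below are assumptions; substitute your own gateway's log location and the providers actually in use.

# Summarise connections to known AI provider endpoints from a proxy or egress log.
# Log path and field layout are assumptions; adapt to your gateway's log format.
grep -E 'api\.anthropic\.com|api\.openai\.com' /var/log/proxy/access.log \
    | awk '{print $1}' | sort | uniq -c | sort -rn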

Compliance Monitoring

  • Privacy policy adherence checking
  • Regulatory requirement compliance
  • Data retention policy enforcement
  • User rights request handling

Audit Requirements

Regular Audits

Quarterly Reviews

  • AI tool usage patterns
  • Privacy policy compliance
  • Security control effectiveness
  • User training completion

Annual Assessments

  • Comprehensive privacy impact assessment
  • Security posture evaluation
  • Regulatory compliance review
  • Third-party service provider evaluation

Documentation

Required Records

  • Data processing activities
  • Privacy impact assessments
  • Security incident reports
  • User consent and opt-out records

Audit Trails

  • AI tool access logs (see the wrapper sketch after this list)
  • Data transfer records
  • Configuration changes
  • Policy updates and communications
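
Where a tool does not emit sufficient access logs on its own, a thin wrapper can record each invocation locally before handing off to the real command. This is a hypothetical sketch: the script name, log path, and wrapped command are placeholders.

#!/usr/bin/env bash
# ai-audit.sh: hypothetical wrapper that appends an audit record per AI tool invocation.
AUDIT_LOG="${AI_AUDIT_LOG:-$HOME/.ai-tool-audit.log}"

printf '%s\tuser=%s\tcwd=%s\tcmd=%s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$USER" "$PWD" "$*" >> "$AUDIT_LOG"

# Hand off to the real tool (placeholder command name).
exec claude-code "$@"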

Reporting

Regular Reports

  • Monthly usage summaries
  • Quarterly compliance reports
  • Annual privacy assessments
  • Incident response summaries

Stakeholder Communication

  • Executive dashboard updates
  • Team training reports
  • Client privacy notifications
  • Regulatory filing requirements

📞 Privacy Support & Contacts

Privacy Questions & Support

Knowcode Ltd Privacy Team
General Privacy Questions: privacy@knowcode.co.uk
Data Subject Rights Requests: privacy@knowcode.co.uk
Privacy Impact Assessments: privacy@knowcode.co.uk

Privacy Incident Reporting
Immediate Response: security@knowcode.co.uk
Security Breaches: security@knowcode.co.uk

External Resources
Anthropic Privacy Center: https://privacy.anthropic.com
OpenAI Privacy Policy: https://openai.com/privacy
GDPR Information: https://gdpr.eu

We're committed to maintaining the highest standards of privacy protection and transparent communication about data handling practices.


Privacy Policy Summary

This LLM Processing Privacy Policy provides complete transparency about data flows, network boundaries, and privacy protections when using AI tools, enabling informed decision-making and maintaining compliance with privacy regulations and organizational policies.

The result: Clear understanding of what data goes where, robust privacy protections, and comprehensive compliance framework for AI-powered operations.


Document Control Information

  • Classification: Internal Use / Compliance
  • Distribution: All teams using AI tools, Legal, Security, Compliance
  • Review Authority: Privacy Officer, Legal Department, Security Team
  • Next Review: 2026-01-12 (or upon significant service changes)
  • Document Version: 1.1
  • Related Policies: IP Ownership Framework, Claude Code Data Handling