As more teams integrate APIs from providers like OpenAI, Anthropic, and other LLM services, it becomes easy for sensitive data to end up in prompts by accident.
Some questions I’m trying to understand:
• Do companies route AI API traffic through an internal gateway or proxy?
• How do you prevent sensitive information (customer data, credentials, internal documents) from being sent to external models?
• Is AI usage across teams actually tracked anywhere?
• If an auditor asked how AI systems are governed in your company, would you have a clear answer?
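For context on the gateway question: one approach I've seen discussed is having the proxy scan outbound prompts and redact obvious sensitive patterns before forwarding to the provider. A minimal sketch of that redaction step, with hypothetical patterns (a real deployment would need a far fuller ruleset and likely a dedicated DLP tool):

```python
import re

# Hypothetical patterns a gateway might scan for before forwarding a
# prompt to an external LLM API. Illustrative only -- not a full ruleset.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),   # AWS access key ID shape
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN shape
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact("Email jane@example.com, key AKIA1234567890ABCDEF"))
# -> Email [REDACTED_EMAIL], key [REDACTED_AWS_KEY]
```

Regex-based redaction obviously misses free-text secrets (internal docs, customer names), which is partly why I'm asking what teams actually do in practice.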
I’d be interested to hear how teams are currently handling this in practice.