Are companies preventing sensitive data from being sent to external LLM APIs?

  • Posted 6 hours ago by jayakrishna96
  • 2 points
I’m curious how engineering and security teams are handling governance around AI usage inside companies.

As more teams integrate APIs from providers like OpenAI, Anthropic, and other LLM services, it seems possible for sensitive data to accidentally end up in prompts.

Some questions I’m trying to understand:

• Do companies route AI API traffic through some internal gateway or proxy? (See the sketch after this list for the kind of thing I mean.)
• How do you prevent sensitive information (customer data, credentials, internal documents) from being sent to external models?
• Is AI usage across teams actually tracked anywhere?
• If an auditor asked how AI systems are governed in your company, would you have a clear answer?
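
To make the first question concrete, here is a rough sketch of the kind of gateway I'm imagining: a small Flask proxy that redacts obvious secrets before forwarding requests to an OpenAI-compatible chat completions endpoint, and logs what it redacted for auditing. The regex patterns, the X-Team header, and the UPSTREAM_API_KEY variable are made up for illustration; I'd assume real deployments use proper DLP tooling or classifiers rather than a handful of regexes.

```python
"""Illustrative internal LLM gateway: redact obvious secrets, then forward."""
import os
import re
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# External provider endpoint; the gateway holds the credential, callers don't.
UPSTREAM = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["UPSTREAM_API_KEY"]  # hypothetical env var for this sketch

# Toy patterns only -- stand-ins for whatever detection a real gateway would use.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}


def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which rules fired."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, hits


@app.route("/v1/chat/completions", methods=["POST"])
def proxy():
    body = request.get_json(force=True)
    findings = []
    # Scrub each message before it leaves the company network.
    for message in body.get("messages", []):
        if isinstance(message.get("content"), str):
            message["content"], hits = redact(message["content"])
            findings.extend(hits)
    if findings:
        # Central audit trail: which team triggered which rule, and when.
        app.logger.warning("redacted %s for team=%s", findings,
                           request.headers.get("X-Team", "unknown"))
    upstream = requests.post(
        UPSTREAM,
        json=body,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code


if __name__ == "__main__":
    app.run(port=8080)
```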

I’d be interested to hear how teams are currently handling this in practice.

0 comments