As AI vendors automatically opt customers into having their data used to train large language models, ethical and privacy concerns are mounting. Companies such as Slack, Adobe, and Google (with Gemini) default to using customer data for training, prompting questions about GDPR compliance, transparency, and user consent over training data. Organizations must carefully document and communicate their AI training processes to comply with privacy regulations and to address concerns over personal data usage, transparency, and accountability.

AI Vendors' Auto Opt-In Raises Questions

The debate covers user consent, GDPR compliance, responsibility for generative AI outputs, and compatibility with 'right to be forgotten' laws. Legal and privacy experts, industry consultations, and education on AI risks and opportunities will all play a significant part in shaping future regulations and practices surrounding AI training.
https://www.bankinfosecurity.com/blogs/training-llms-questions-rise-over-ai-auto-opt-in-by-vendors-p-3625