Is Your Privacy Governance Ready for AI?
你的隐私治理为人工智能做好准备了吗?

梁晨    广西师范大学漓江学院
时间:2025-09-28 语向:英-中 类型:商务 字数:398
  • Consider what governance your organization needs to move past regulatory-driven privacy protection into data and data-use ethics. Data, data use, and AI ethics involve more than privacy. Some organizations are adopting principles around explainability, societal benefit, and fairness, among other principles. Identify which principles are relevant, and more importantly, what these principles mean to your organization. Get wide executive agreement on these principles, and translate them into concrete standards and procedures for each practice within your organization to enact trust-driven approaches.
    思考您的组织需要怎样的治理,才能从法规驱动的隐私保护迈向数据及数据使用伦理。数据、数据使用和人工智能伦理涉及的不仅仅是隐私。一些组织正在采纳可解释性、社会效益、公平性等原则。确定哪些原则与自身相关,更重要的是,明确这些原则对您的组织意味着什么。就这些原则取得高管层的广泛认同,并将其转化为组织内各项实践的具体标准和程序,以落实信任驱动的方法。
  • Appreciate the differences between AI and software. Find the best connection points with AI development teams, given their experimentation-driven approach. There should be oversight as to where and when in the development process data is accessed, cleaned, manipulated, augmented, protected. Doing so can enable more thorough analysis of broader ethical considerations—privacy and beyond.
    充分认识人工智能与传统软件的区别。鉴于人工智能开发团队以实验为驱动的工作方式,找准与其协作的最佳衔接点。应当对开发过程中数据在何时、何处被访问、清理、加工、扩充和保护进行监督。这样做有助于对隐私乃至更广泛的伦理考量进行更透彻的分析。
  • Build off what you have. No one wants burdensome governance and compliance. But expanding governance practices unnecessarily can lead to poor compliance down the line. Building off the processes that already exist can help reduce any friction and change management required to gain effective oversight. Most use of personal data already requires a Privacy Impact Assessment or a GDPR-required Data Protection Impact Assessment in higher risk data scenarios. Augment these requirements with additional questions relating to AI, and mandate their use across all AI development. Place particular focus on questions to identify and assess the likelihood and significance of benefits, risks, and mitigation controls. Recognize this action to be a benefit-risk decision-making tool, and bring a broad cross-section of internal stakeholders to the decision input.
    以现有基础为起点。没有人想要繁琐的治理与合规要求,但不必要地扩张治理实践反而可能导致后续合规不力。基于既有流程进行构建,有助于减少实现有效监督所需的摩擦和变更管理。在较高风险的数据场景中,大多数个人数据的使用已经需要进行隐私影响评估或GDPR要求的数据保护影响评估。在这些要求之上补充与人工智能相关的问题,并强制在所有人工智能开发中使用。应特别关注那些用于识别和评估收益、风险及缓解控制措施的发生可能性与重要程度的问题。将这一做法视为一种收益-风险决策工具,并让广泛的内部利益相关方参与提供决策意见。
  • Align on language. And don’t go it alone: anticipate any differences in how terminology is used between privacy and AI teams. Educate and collaborate to develop a shared understanding of these terms. Collaborative governance with a multilayer approach can help reduce friction and provide the broad range of perspectives necessary for your organization to develop robust and lasting mechanisms for ethical AI.
    统一语言,且不要孤军奋战:预判隐私团队与人工智能团队在术语使用上可能存在的差异,通过培训与协作形成对这些术语的共同理解。采用多层次方法的协作治理有助于减少摩擦,并为您的组织提供必要的多元视角,从而建立健全而持久的人工智能伦理机制。
  • Use AI techniques to address privacy issues. While AI introduces a number of complications in dealing with privacy, it also provides a few innovative ways of solving privacy issues. New techniques like homomorphic encryption and differential privacy enable data sharing through encryption or noise-induction. Federated learning allows insights to be generated locally and aggregated without revealing private data. As an active space for research, AI will undoubtedly yield new applications that may enable privacy protection. (A minimal differential-privacy sketch follows below.)
    运用人工智能技术解决隐私问题。虽然人工智能给隐私处理带来了诸多复杂因素,但它也为解决隐私问题提供了一些创新途径。同态加密和差分隐私等新技术通过加密或噪声注入实现数据共享;联邦学习则允许在本地生成洞见并加以聚合,而不泄露隐私数据。作为一个活跃的研究领域,人工智能无疑会催生出有助于隐私保护的新应用。(下文附一个差分隐私的简单示例。)
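
To make the differential-privacy idea above concrete, here is a minimal sketch in Python: a counting query is answered with calibrated Laplace noise before the result is shared. The toy dataset, the epsilon values, and the helper names are illustrative assumptions added here, not part of the original article.
为使上文的差分隐私概念更加具体,下面给出一个Python的简单示例:在共享结果之前,为计数查询加入经过校准的拉普拉斯噪声。其中的示例数据、epsilon取值和函数命名均为此处的示意性假设,并非原文内容。

```python
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def private_count(records: list[bool], epsilon: float) -> float:
    """Answer a counting query with differential privacy.

    Adding or removing one record changes a count by at most 1, so the
    sensitivity is 1 and the Laplace noise scale is 1 / epsilon.
    Smaller epsilon means more noise and a stronger privacy guarantee.
    """
    return sum(records) + laplace_noise(scale=1.0 / epsilon)


# Toy example: share roughly how many users opted in, without exposing the exact figure.
opted_in = [True, False, True, True, False, True, False, True]
print(private_count(opted_in, epsilon=0.5))  # noisier, stronger privacy
print(private_count(opted_in, epsilon=5.0))  # closer to the true count of 5
```

The same pattern of computing locally and sharing only a protected aggregate is also the intuition behind the federated learning setup described above.
在本地完成计算、只共享受保护的聚合结果,这一模式也正是上文所述联邦学习思路的直观体现。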
