Privacy Engineer III, Machine Learning

Google

  • Full time
  • Bangalore, Karnataka, India
  • On-site
    Job Description

    Minimum qualifications:

    • Bachelor's degree or equivalent practical experience.
    • 2 years of experience designing solutions that maintain or enhance an organization's privacy posture by analyzing and assessing proposed engineering designs (e.g., product features, infrastructure systems) and influencing stakeholders.
    • 2 years of experience applying privacy technologies (e.g., differential privacy, automated access management solutions) and customizing existing solutions and frameworks to meet organizational needs.

    Preferred qualifications:

    • Experience managing multiple high-priority requests while determining resource allocation (e.g., time, prioritization) to solve problems in a fast-paced, changing organization.
    • Experience in end-to-end development of ML models and applications.
    • Knowledge of common regulatory frameworks (e.g., GDPR, CCPA).
    • Understanding of privacy principles and a passion for keeping people and their data safe.

    About the job

    The Governance team manages risk and compliance objectives, specifically risks related to data, products, and software systems within Google. Our aim is to ensure that systems, products, and data are managed responsibly to keep our users, employees, and partners safe.

    Google's innovations in AI, especially Generative AI, have created a new and exciting domain with immense potential. As innovation moves forward, Google and the broader industry need increased privacy, safety, and security standards for building and deploying AI responsibly.

    To help meet this need, the Generative AI Assessments team's mission is to build up Google's assessment capabilities for generative AI applications.

    Responsibilities

    • Conduct privacy impact assessments and drive privacy outcomes for artificial intelligence datasets, models, products, and features.
    • Escalate critical and novel artificial intelligence risks to central and product leadership forums, as needed.
    • Design and develop technical documentation across teams to drive consistent privacy decisions within the artificial intelligence domain.
    • Work with internal tools and systems for understanding and assessing machine learning data and model lineage, properties, and risks.