AIML – ML Researcher, Safety & Red Teaming – 200563108 – Pittsburgh, Pennsylvania, United States

Apple

Join us as we build groundbreaking, world-class products for our customers! Apple’s Data and ML Innovation team focuses on innovative technologies, methodologies, and research to enable fantastic user experiences and advance the frontier of machine learning.

Our team is looking to hire a tech lead with a strong track record in applied research who is passionate about ML and Human-Computer Interaction – particularly as applied to the responsibility, fairness, and safety of Generative AI. In this role, you will lead research focused on enabling ML technologies that power breakthrough user experiences while upholding Apple’s values, privacy, and quality standards.

This role will be highly multifunctional. You will collaborate closely with top machine learning researchers and engineers, software engineers, and design teams to develop and deliver groundbreaking solutions for Apple products. We believe that the most exciting problems in machine learning research arise at the intersection of emerging technologies and real-world use cases. This is also where the most critical breakthroughs come from. As a researcher, you will:

– Define and deliver responsible machine learning technologies

– Research and advance red teaming methods for generative AI models

– Research and develop mitigations and safeguards to ensure the safe deployment of LLMs in Apple products

– Develop methods and frameworks to evaluate our products

– Advocate for scientific and engineering excellence: You will contribute to the architecture and high-level structure of Apple’s AI-powered platform and features

– Work multi-functionally on a diverse set of challenging problems and collaborate with extraordinary software engineers, machine learning engineers, and researchers to impact the future of Apple products

– Work closely with product and partner teams to drive requirement definition, technical quality of deliverables, and execution

– Strong research or product deployment record in areas related to responsible AI, with publications in top ML and HCI venues (e.g., ACL, CHI, CVPR, EMNLP, FAccT, ICML, Interspeech, NeurIPS, UIST)

– Strong research fundamentals, machine learning principles, and development methodologies around LLMs, foundation models, and diffusion models

– Deep experience in human-centered research – you understand design fundamentals in Human-Centered AI, Responsible AI, Human-Computer Interaction, and related fields

– Willingness to work with highly sensitive content, including exposure to offensive and controversial material
