Senior Software Framework Engineer – System Intelligence and Machine Learning, ISE – 200565547 – Cupertino, California, United States

Apple

Are you interested in making an extensive and impactful contribution to Machine Learning at Apple? Perceiving and reasoning about the world around us, through images/video, text, and/or other signals, is fundamental to crafting engaging user experiences that impact the lives of millions of our customers. Our Scene Understanding (SUN) team strives to apply cutting-edge machine learning in innovative, inclusive, and impactful ways to address problems that our users deeply care about. We are part of the System Intelligence and Machine Learning (SIML) group, which provides foundational computer vision and machine learning technologies to Apple's ecosystem. Our work is behind essential features such as Camera, Text & Handwriting recognition, and Apple Intelligence experiences (Image Playground, Writing Tools, Smart Script, Math Notes).

The SUN team is seeking a technical lead in ML software engineering. The primary responsibilities of this position include algorithm design and implementation, integrating research into production frameworks, and collaborating closely with product teams before and after feature launch.

Here is a selection of relevant Apple machine learning blog posts and WWDC presentations:
https://machinelearning.apple.com/research/on-device-scene-analysis
https://machinelearning.apple.com/research/salient-object-segmentation
https://machinelearning.apple.com/research/panoptic-segmentation
https://developer.apple.com/videos/play/wwdc2019/222/
https://developer.apple.com/videos/play/wwdc2019/225/

The Scene Understanding team is seeking a senior software/ML engineer with a proven track record of shipping customer experiences. You will work very closely and cross-functionally with ML researchers, software engineers, and hardware & design teams. Among the most important requirements are a deep understanding of software fundamentals and the ability to translate ML algorithms into production-quality code. Solutions developed will leverage multi-modal inputs (visual, range, NLP, audio) with a strong emphasis on visual processing.

Our team contributes to a variety of shipping workflows you may already use regularly, including Apple Intelligence, Photos Search, Curation, Memories, Intelligent Autocrop, Visual Captioning for Accessibility, Federated Learning on visual content, Real-time Classification & Saliency in Camera, Semantic Segmentation in Camera, and several on-device backbones across the system. Further, several of our projects are surfaced to third-party developers through Vision & Core ML. Shipping APIs include image tagging, image similarity, saliency estimation, and feature prints for transfer learning.

IN THIS ROLE YOU WILL:
– Lead and guide the development of our production framework that exposes core ML technologies to clients across Apple and to third-party developers.
– Work closely with machine learning researchers, hardware teams, user experience/design teams, and others to ensure these core ML technologies run as efficiently and accurately as possible across a variety of Apple platforms (iOS, macOS, visionOS, etc.).
– Iterate with multiple cross-functional teams as we refine various user experiences and strive to address meaningful user problems as innovatively and inclusively as possible.

KEY QUALIFICATIONS:
– 5+ years of industry experience with proven leadership in framework development
– Ability to design and implement flexible APIs that expose machine learning algorithms to clients across Apple platforms
– Proficient in C++ and/or Objective-C
– Proven experience with hands-on software engineering fundamentals
– Experience with multiple modalities (image, text, audio, etc.)
– Proven prototyping skills
– Understanding of the unique challenges associated with transitioning a prototype into a final product
– Familiarity with the challenges of developing algorithms that run efficiently on resource-constrained platforms
