The Machine Learning and Platforms Technology (MLPT) team is part of Apple's AIML organization. MLPT's on-device machine learning (ML) team builds the inference stack that runs all ML networks on Apple Silicon. On this team, we write the converter and compiler that translate a source network definition into one that the execution units in hardware can interpret. We build tools for network optimization, write the runtime that schedules and manages execution on hardware, and provide guidance on hardware/software co-design of current and future workloads alongside hardware accelerators. The team works cross-functionally with several partner teams inside Apple (such as CPU, GPU, Neural Engine, speech understanding, Camera, Photos, and Vision Pro) as well as with external app developers. Core ML is an example of an external-facing product from this team. If this role sounds exciting, we want to hear from you!
In this role, you'll dig into the latest research on efficient on-device inference. You'll prototype new approaches to improve inference on critical models without sacrificing accuracy. You'll perform deep-dive analyses of both our software stack and our hardware, and devise innovative ways to improve them. You'll also examine ML inference performance across a range of devices, from small wearables up to the largest Apple Silicon Macs.
We're looking for candidates who:

- Understand the fundamentals of ML, keep up with innovative research in some area of ML, and are familiar with adapting and training neural networks, with experience developing code in one or more training frameworks (such as PyTorch, TensorFlow, or JAX)
- Have knowledge of computer architecture (CPU and GPU), and understand performance modeling and analysis of computer systems and how to optimize code for a given platform
- Have experience with ML systems, particularly on-device inference scenarios
- Know how to perform comprehensive analyses (for performance, power, accuracy, etc.) of various deep learning techniques starting from first principles, and use benchmarking to test and prove ideas; have experience with system optimizations, including building analytical models and implementing prototypes
- Have a passion for software architecture, APIs, and high-performance, extensible software
- Have strong programming and software design skills (proficiency in C/C++ and/or Python)
- Are collaborative and product-focused, with excellent communication skills