Apple's ML Frameworks team in the GPU, Graphics and Displays org provides GPU acceleration for popular machine learning libraries such as TensorFlow, PyTorch and JAX using the Metal runtime and device backend. It optimizes compute performance with kernels and computational graphs that are fine-tuned for the unique characteristics of each Metal GPU family. We are always looking for exceptionally dedicated individuals to grow our outstanding team.
Our team is seeking extraordinary machine learning and GPU programming engineers who are passionate about providing robust compute solutions for accelerating machine learning libraries on Apple Silicon. This role offers the opportunity to influence the design of compute and programming models in next-generation GPU architectures.
* Responsibilities:
* Design and develop compiler-based optimizations for the Metal backend in ML frameworks, such as torch.compile for PyTorch (see the sketch after this list)
* Work on a cutting-edge ML inference framework project and optimize code for efficient, scalable ML inference using distributed techniques
* Implement features of the Metal device backend for ML training acceleration technologies
* Work with the core teams of PyTorch, JAX, or TensorFlow to provide Metal runtime and device backend support
* Tune GPU-accelerated training across products.
* Perform in-depth analysis and compiler- and kernel-level optimizations to ensure the best possible performance across hardware families.
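As context for the torch.compile item above, here is a minimal sketch of the user-facing stack this team accelerates: a small PyTorch model placed on the Metal ("mps") device and run through torch.compile. The model, shapes, and any acceleration behavior shown are illustrative assumptions only; how much of the compile path is Metal-accelerated depends on the PyTorch version.

```python
# Illustrative sketch: run a small PyTorch model on the Metal ("mps") device
# and pass it through torch.compile. Shapes and model are arbitrary examples.
import torch
import torch.nn as nn

# Fall back to CPU when no Metal-capable GPU is available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(1024, 1024),
    nn.GELU(),
    nn.Linear(1024, 10),
).to(device)

x = torch.randn(64, 1024, device=device)

# torch.compile traces the model and hands it to a compiler backend; the degree
# of Metal acceleration on this path varies with the PyTorch release in use.
compiled_model = torch.compile(model)
out = compiled_model(x)
print(out.shape)  # torch.Size([64, 10])
```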
* Intended deliverables:
* GPU-accelerated ML frameworks technology
* Optimized ML training across products.
If this sounds of interest, we would love to hear from you!
* 3+ years of programming and problem-solving experience with C/C++/ObjC
* Experience with distributed training or inference techniques
* Experience with GPU compute programming models and optimization techniques
* Experience with system-level programming and computer architecture