The AI/ML Integration and Delivery team is looking for an experienced Data Scientist. Our organization specializes in CI/CD and the delivery of AI models to servers and customer devices alike. Our team's mission is to make the engineering workflow as transparent as possible for AIML engineers by providing metrics and engineering UIs. This requires us to produce and aggregate data from many sources, emanating from both our engineering systems and our customers' devices, then process it, report on it, and guarantee its quality and reliability. What sets this role apart from typical product DS roles is that it requires a deep interest in complex engineering systems and a passion for measuring their efficiency and reliability with data (from our own pipelines all the way to customers' devices).
We are looking for a strong data scientist to help us fulfill this mission by collaborating with a global team of data scientists and software engineers.
In this role, you will be the face of the metrics team for our external stakeholders, owning KPIs end-to-end, from definitions and instrumentation design to reporting. You will be expected to build a deep understanding of our overall engineering experience, our CI/CD practices, and our data landscape, and bring everything together. You will be the expert on our org's KPIs and work proactively to improve their usability. You will collaborate closely with Data Scientists, Release Engineers, UI Engineers, Backend Engineers, and Device Engineers, as well as with senior leadership.
The ideal candidate is a highly motivated, collaborative, and proactive individual who can communicate effectively and can adapt and learn quickly.
- Expert at translating ambiguous business problems into technical solutions by working with business partners to design, develop, and deploy data science solutions that drive key product decisions
- Strong programming skills, including data-querying skills (SQL and/or Spark, etc.) and experience with a scripting language for data processing and development (e.g., Python, R, or Scala)
- Self-starter who is comfortable with ambiguity and enjoys working in a fast-paced, dynamic environment
- Experienced in building and maintaining large-scale ETL/ELT pipelines that are optimized for performance and can handle data from various sources, structured or unstructured
- Proven expertise in developing data visualizations and reporting (e.g., Tableau, Superset, QlikView)
- Proficiency in data science, machine learning, and statistical data analysis