Data Scientist/Software Engineer, Reliability Engineering

Tesla

The Role 

 

The reliability data team is looking for a Data Scientist/Software Engineer to use large-scale data to help Tesla engineers design and validate the most compelling and reliable products for our customers. The reliability data team collects real-time life data from test and fleet sources (energy, charging, and vehicle products) and is responsible for retrieving, analyzing, and summarizing results for cross-functional teams. The team supports the whole design cycle by building software tools that orchestrate all the reliability physics analyses. The level of this position will be determined at the time of the interview.

 

Responsibilities

 

  • Build robust, flexible, and automated software tools to enable complex analysis of real-time fleet data

  • Apply statistical analysis to test (accelerated life) and field (life) data to inform reliability physics modeling/analyses and associated corrective actions

  • Answer complex questions on fleet usage and behavior to enable proactive monitoring, improve reliability, and minimize field failures

  • Work closely with Reliability and Design engineers to create/interpret/validate numeric models of fielded and in-test products

  • Contribute to the automation and standardization of our data pipelines

  • Build visualizations to communicate results effectively

 

Requirements

 

  • Bachelor’s degree or higher in a quantitative discipline (e.g., Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or the equivalent in experience and evidence of exceptional ability

  • Advanced knowledge of Python

  • Strong knowledge of data structures, architectures, and languages such as SQL

  • Solid understanding of statistics (Weibull distribution, Maximum Likelihood Estimation, Bayesian methods, Monte Carlo analysis, etc.)

  • General knowledge of physics and engineering principles

  • General knowledge of data pipelining (Airflow, ETL, PySpark)

  • Experience and interest in data visualization techniques

  • Ability to solve problems and adjust priorities on short notice to meet deadlines

  • Strong verbal and written communication skills

 

Nice to have

  • Experience with the Big Data ecosystem (Spark, Presto, Data Lake/Warehouse)

  • Experience with rapid web application development (e.g., Flask, Streamlit, Dash)

  • Experience with containerization (e.g., Docker)