Robotics Engineer, Fleet Learning – Autopilot


The Role

As a member of the Autopilot Fleet Learning team, you will
take on a highly cross-functional role: automatically labelling large amounts of
data from the global Tesla vehicle fleet, training cutting-edge deep neural
networks on those labels, and integrating the models with the rest of the
Autopilot software stack to support numerous state-of-the-art autonomous driving
functions and safety-critical features.

We are looking both for generalists with a breadth of
expertise who are excited to work across the entire pipeline, and for
specialists who can dive deep into specific modules. An ideal candidate will
possess strong expertise in at least one of the following areas:


  • Develop offline state estimation, 3D
    reconstruction, and sensor fusion algorithms to automatically generate
    supervision for deep neural networks.
  • Train deep neural networks on large-scale,
    auto-labelled datasets.
  • Design and implement tools, tests, and metrics
    to accelerate the data generation and model development cycles.
  • Integrate the models with the real-time embedded
    C++ software stack.
  • Work with the planning & controls team to
    develop control policies on top of network outputs.


Requirements

  • Minimum 3 years of experience writing
    production-level Python or C++.
  • Strong mathematical fundamentals, including
    linear algebra, vector calculus, probability theory, and numerical methods.
  • Familiarity with basic computer vision concepts,
    such as camera intrinsics, extrinsics, projections, and epipolar geometry.
  • Exposure to a major deep learning framework such
    as PyTorch, TensorFlow, Keras, or MXNet.
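To illustrate the computer vision fundamentals listed above, here is a minimal sketch of projecting a 3D point into pixel coordinates with camera intrinsics and extrinsics. The matrix values are illustrative assumptions, not parameters from any Tesla system:

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths fx, fy and principal point cx, cy.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: identity rotation and zero translation (camera frame = world frame).
R = np.eye(3)
t = np.zeros(3)

def project(point_world):
    """Project a 3D world point into pixel coordinates."""
    p_cam = R @ point_world + t   # world frame -> camera frame
    uv = K @ p_cam                # camera frame -> homogeneous pixel coordinates
    return uv[:2] / uv[2]         # perspective divide

# A point on the optical axis projects to the principal point (320, 240).
print(project(np.array([0.0, 0.0, 2.0])))
```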

Preferred Qualifications

  • Experience writing both production-level Python
    (including NumPy and PyTorch) and modern C++.
  • Proven track record of training and deploying
    real-world neural networks.
  • Comfortable with general robotics, state
    estimation, and filtering.
  • Prior work in robotics, state estimation, visual
    odometry, SLAM, structure from motion, or 3D reconstruction.
  • Exposure to recent advances in differentiable
    rendering and neural rendering techniques.
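The state estimation and filtering work mentioned above can be illustrated with a minimal one-dimensional Kalman filter that estimates a constant signal from noisy measurements. The noise levels and function name are illustrative assumptions, not part of any production stack:

```python
import numpy as np

def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Estimate a (roughly constant) scalar state from noisy measurements."""
    x, p = 0.0, 1.0                  # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + process_var          # predict: variance grows by process noise
        k = p / (p + meas_var)       # update: Kalman gain
        x = x + k * (z - x)          # blend prediction with the measurement
        p = (1.0 - k) * p            # variance shrinks after the update
        estimates.append(x)
    return estimates

# Noisy observations of a true value of 3.0.
rng = np.random.default_rng(0)
zs = 3.0 + rng.normal(0.0, 0.5, size=200)
est = kalman_1d(zs)
```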