Tesla is migrating from programmed logic to neural nets for FSD decision making

EVANNEX
3 min read · Jan 14, 2021


A Tesla firmware hacker and researcher who goes by the name 'green' recently revealed that Tesla is migrating towards neural nets (NNs) for Full Self-Driving (FSD) decision making. Currently, Tesla vehicles make decisions such as determining 'right of way' using programmed logic written in C++.

Above: Tesla’s Autopilot is constantly improving (Image: Tesla)

Green tweeted, “Looked into 2020.48 NNs (yay for holiday break free time!) Interesting to see they are migrating right of way guessing from C++ as seen in the early FSD betas in October to NNs now. The quantum leap is being implement bit by bit I guess.”

Green thinks the Silicon Valley-based automaker is implementing this migration bit by bit, probably starting with the 'right of way guessing' function before moving on to other, more complex decisions the car needs to make while driving.
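To make that distinction concrete, here is a minimal, hypothetical sketch of the difference, written in C++ since that is reportedly the language of the current hand-written logic. None of this is Tesla's actual code; names like rightOfWayRuleBased, SceneFeatures, and NeuralNet are invented purely for illustration. The rule-based version encodes the decision directly, while the NN version merely thresholds the output of a trained model.

    // Hypothetical sketch only, not Tesla's actual code.
    // Traditional programming: explicit rules an engineer wrote down by hand.
    bool rightOfWayRuleBased(bool egoArrivedFirst, bool otherCarOnRight) {
        if (egoArrivedFirst) return true;   // first vehicle to stop proceeds first
        return !otherCarOnRight;            // otherwise, yield to the vehicle on the right
    }

    // Neural-net style: the decision comes from a model trained on fleet data,
    // and the surrounding code only thresholds the model's predicted probability.
    // SceneFeatures and NeuralNet are invented placeholder types.
    struct SceneFeatures {
        // camera-derived inputs describing the intersection would live here
    };

    struct NeuralNet {
        // stand-in for a trained network; returns a fixed value for illustration
        float predictRightOfWayProbability(const SceneFeatures&) const { return 0.9f; }
    };

    bool rightOfWayNeuralNet(const NeuralNet& model, const SceneFeatures& scene) {
        return model.predictRightOfWayProbability(scene) > 0.5f;
    }

The shift green describes is essentially replacing the first style with the second, one driving decision at a time.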

Above: Tesla FSD Beta driving visualizations (Source: What’s Inside Family / YouTube)

At this stage, Tesla is iterating on and improving its FSD Beta decision-making with each firmware update it pushes to a select few cars in the United States. With NNs in the loop, the cars can draw on the Tesla 'mothership,' which holds a plethora of machine learning data, to help make better decisions.

To simplify this, Tesla Autopilot engineer Kate Park explains how computer vision works in a recent video produced for kids. Check out the graphic from the video below, which shows how decision making and object detection are limited when they rely on traditional programming alone. With Machine Learning + AI + Neural Nets, the possibilities become far broader.

Above: Tesla Autopilot engineer explains how computers recognize objects via Traditional Programming vs. Machine Learning + AI (Source: Code.org / YouTube)

For example, traditional programming cannot recognize an "X" if it is not drawn exactly within the defined parameters (see image above). But the combination of machine learning, AI, and neural networks enables the computer to learn the many patterns and variations in which an "X" can appear.
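As a rough illustration of that contrast (again hypothetical and not taken from the video, using a tiny 5x5 drawing and invented function names), a rule-based check demands a near-exact match against one fixed template, while a learned linear classifier scores the drawing with weights trained on many examples:

    #include <array>
    #include <cstddef>

    constexpr std::size_t kPixels = 25;  // a tiny 5x5 drawing, flattened to 25 pixels

    // Traditional programming: the drawing must match one fixed template almost
    // pixel-for-pixel, so any unfamiliar drawing style is rejected.
    bool isXRuleBased(const std::array<int, kPixels>& image,
                      const std::array<int, kPixels>& templateX) {
        int mismatches = 0;
        for (std::size_t i = 0; i < kPixels; ++i) {
            if (image[i] != templateX[i]) ++mismatches;
        }
        return mismatches < 3;  // only near-exact copies of the template pass
    }

    // Machine learning: weights learned from many labeled examples score the
    // drawing, so styles of "X" the programmer never anticipated can still pass.
    bool isXLearned(const std::array<int, kPixels>& image,
                    const std::array<float, kPixels>& learnedWeights,
                    float learnedBias) {
        float score = learnedBias;
        for (std::size_t i = 0; i < kPixels; ++i) {
            score += learnedWeights[i] * static_cast<float>(image[i]);
        }
        return score > 0.0f;  // a simple learned linear classifier
    }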

Making computer vision work well requires thousands (sometimes millions) of images to correctly define an object. In Tesla's case, cars across the fleet send back video data from all eight cameras to help train the company's neural nets. Tesla has billions of miles of real-world driving data gathered from its worldwide fleet, giving it a substantial advantage over other companies working on self-driving cars.

Above: How Tesla's neural net vision works, explained by Tesla AI director Andrej Karpathy at Tesla Autonomy Day 2019 (Source: Tesla)

Last year, Tesla CEO Elon Musk said that the company's powerful next-gen neural network training supercomputer (dubbed Dojo) is being built and that version 1.0 should be up and running in 2021.

Migrating Autopilot driving decisions towards NNs represents a substantial leap forward for Tesla's FSD efforts. In turn, 2021 could yield some notable advances.

Video

Video: How computer vision works, explained by Tesla Autopilot engineer Kate Park (Source: Code.org / YouTube)

===

Originally published at https://evannex.com on January 14, 2021.

Written by: Iqtidar Ali. An earlier version of this article was originally published on Tesla Oracle.
