On The Future of Tesla and Full Self-Driving Cars

Posted on: April 28, 2019
Posted in Strategy

Last week when Tesla launched its dedicated autonomy ASIC, a lot of the focus was on the new hardware itself, which provides redundancy and incredibly fast neural processing. Undertaking this hardware development and executing on it during Tesla’s production hell is an unprecedented feat for any company, never mind a carmaker. As I wrote in Tesla’s Incredible Platform Advantage:

The key to understanding what Tesla did in releasing what they call a supercomputer is understanding the 2019 version of building and supporting a full custom semiconductor design team. The mere fact that they are already shipping this computer retrofitted to an existing form factor in Model X, S and 3 is astounding. This was not a pre-announce by Tesla. This was an in-production launch.

And the key to understanding Elon Musk is to separate what is hyperbole from fact—he uses both skillfully. The most important truth from watching yesterday’s unveiling is that this chip is not only in full production from fabs, but also already shipping inside cars.

Saying you are going to do something like full autonomy on ‘X’ time-scale is one type of commitment, but saying you built an entirely new rendering engine for the pixels that fly by in the physical world is quite another. While any projected launch date for ‘full autonomy’ is supposition, marketing, and subject to industry definitions, demand and regulations, the latter is something Tesla can — and did — control.

When the curve bends at an exponential rate, improvements often get taken for granted. We are used to that with Moore’s Law. A lot of smart people rightfully see hardware development as mapping to an exponential increase for known reasons, whereas software innovation doesn’t happen like that. Algorithms generally don’t get 2x faster every two years; it’s the hardware that allows them to run faster.

However, the limitations around self-driving are system-wide; they are not bottlenecked by innovation on any single axis. The key to system-wide improvement for self-driving is data collection and real-time decisioning on that data – at scale. Data collection is increasing at a linear rate for Tesla, but there is another thing that people don’t see. The overall system is improving exponentially because the mathematics around post-processing of image data is too.

The part that hasn’t been talked about – even by Tesla – is what they are doing around new mathematical transforms to make the data much more robust. 3D image processing – connecting frames of data and applying transforms in new ways – is an incredibly active area of research, since image processing touches almost all fields today.

From my time studying electrical engineering and image processing, I’ll never forget when my college professor trained a transform on a portion of Highway 217 near our school and was then able to input other images – from completely different angles – and detect portions of the freeway which, to the human eye, seemed to have nothing in common. This form of transform math makes image data much more valuable than it seems superficially and is germane today. It’s how computers learn to recognize a breed of dog when given enough images.
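To make that concrete, here is a minimal sketch of the same idea using modern feature matching – my illustration, not the professor’s actual method. ORB descriptors are designed to be invariant to rotation and scale, so the same stretch of road photographed from very different angles can still produce matching features (the image paths are placeholders):

```python
import cv2

# Load two photos of the same stretch of road taken from very
# different viewpoints (paths are placeholders).
img_a = cv2.imread("highway_view_a.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("highway_view_b.jpg", cv2.IMREAD_GRAYSCALE)

# ORB features are invariant to rotation and scale, which is what
# lets the same structure be recognized from different angles.
orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force Hamming matching with cross-checking keeps only
# mutually-best correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# A cluster of low-distance matches means the two images share
# structure, even when humans see "nothing in common".
print(f"{len(matches)} candidate correspondences found")
```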

Innovation in the complex transform mathematics that makes matrix manipulation of collected data more robust is moving at insane speed today. Specifically for self-driving, it is focused on software algorithms that perform image depth sensing, and they are evolving FAST. This pseudo-LiDAR depth-estimation approach from Cornell, published just last week, is telling. They model depth by creating a pseudo-LiDAR point cloud out of raw image data. Their conclusions astounded even them:

The improvements we obtain from this correction are unprecedentedly high and affect all methods alike. With this quantum leap it is plausible that image-based 3D object detection for autonomous vehicles will become a reality in the near future. The implications of such a prospect are enormous. Currently, the LiDAR hardware is arguably the most expensive additional component required for robust autonomous driving. Without it, the additional hardware cost for autonomous driving becomes relatively minor.
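The geometric core of the pseudo-LiDAR idea is easy to sketch. Once a network has estimated a depth for every pixel, each pixel back-projects through the camera intrinsics into a 3D point, and the resulting point cloud can be handed to detectors originally built for LiDAR. Here is a minimal NumPy version of that back-projection step – the depth network itself is the hard part and is omitted, and the intrinsics below are placeholder values:

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy, max_range=80.0):
    """Back-project a dense depth map into a pseudo-LiDAR point cloud.

    depth: (H, W) array of estimated depths in meters (output of a
           stereo or monocular depth network).
    fx, fy, cx, cy: camera intrinsics in pixels (from calibration).
    Returns an (N, 3) array of (x, y, z) points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx      # pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Keep only plausible returns, much as a LiDAR sweep would.
    return points[(points[:, 2] > 0) & (points[:, 2] < max_range)]

# Example with a fake constant-depth map; a real one comes from the network.
cloud = depth_to_pseudo_lidar(np.full((375, 1242), 20.0),
                              fx=720.0, fy=720.0, cx=621.0, cy=187.5)
print(cloud.shape)  # (465750, 3)
```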

The reason Elon says LiDAR is a myth for self-driving is that it’s a hardware-level advancement that relies on someone productizing a solid-state device and commercializing it in volume. The truth is that the way the semiconductor supply chain works will keep LiDAR hardware unachievably expensive – it will never reach economies of scale where a chip company can justify actually making a profit. LiDAR is low volume and niche in nature. Who else needs LiDAR outside of the car industry? Almost no one.

Think about it. We all know LiDAR is prohibitively expensive today, looks goofy attached to Waymo cars, and is a nightmare to replace in an accident. But say some experimental approach brings LiDAR down to $100 in the future, as people say it could. The module is solid-state and you need a few of those per car. Even then the economics simply do not work. Is Toyota going to all of a sudden put this on several models and drive tons of volume? Even if they were to purchase a million of these devices a year, that is a $100M deal for that LiDAR device provider. This may sound like a lot, but it’s not. It would easily cost more than $100M to develop and commercialize such a device. The sad truth of semiconductor technology is that specialized chips need to arrive in massive volume for anyone to make a business out of them. There are a few exceptions to this, but automotive – where literally every dollar matters and people complain about cheap seats – is not one of them.
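To put numbers on that back-of-envelope argument – every figure below is an illustrative assumption, not a real supplier’s financials:

```python
# Back-of-envelope LiDAR supplier economics, using the article's
# illustrative numbers -- every figure here is an assumption.
unit_price = 100                  # dollars per solid-state module
units_per_year = 1_000_000        # a huge design win by auto standards
annual_revenue = unit_price * units_per_year          # $100M
development_cost = 100_000_000    # "easily more than" this to build it

gross_margin = 0.30               # assumed; automotive margins are thin
annual_gross_profit = annual_revenue * gross_margin   # $30M
years_to_recoup = development_cost / annual_gross_profit

print(f"Annual revenue:      ${annual_revenue:>12,}")
print(f"Annual gross profit: ${annual_gross_profit:>12,.0f}")
print(f"Years to recoup R&D: {years_to_recoup:.1f}")  # 3.3+ years, best case
```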

Meanwhile, image sensing is moving at ‘light speed’. It’s being used everywhere. The economies of scale of the visible-light boom are truly astounding. Billions of image sensors will be shipped this year. They are in every phone and security camera, ubiquitous in surveillance technology, and in 3-4 years – by the time people think LiDAR could be on cars – AR headsets will be pushing hundreds of millions of advanced image-processing chips capable of much higher resolution.

During this time Tesla will also be upgrading the image sensors on its cars to what’s available in the smartphone and camera industries, enabling the complex math for depth modeling to get better at a rapid pace – approximately mapping to Moore’s Law. In the near future, Tesla will likely strap 8K image sensors around its cars. These new SKUs will differ from currently shipping Teslas and will run sets of algorithms that leverage much higher-fidelity matrix math.
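One reason higher-resolution sensors translate so directly into better depth modeling: in stereo (and multi-view) depth estimation, range error grows with the square of distance and shrinks as focal length in pixels grows, so doubling sensor resolution roughly halves the error at a given range. A small illustration, with assumed camera parameters rather than Tesla’s actual specs:

```python
# Illustrative stereo depth-error scaling: dz = z^2 * dd / (f * b).
# All parameters here are assumptions, not real camera specs.
def depth_error(z_m, focal_px, baseline_m, disparity_err_px=1.0):
    """Approximate stereo depth error at range z_m (meters)."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Higher resolution => longer focal length in pixels => smaller error.
for focal_px in (1000, 2000, 4000):
    err = depth_error(z_m=50.0, focal_px=focal_px, baseline_m=0.5)
    print(f"f = {focal_px:4d} px -> ~{err:.2f} m error at 50 m")
# f = 1000 px -> ~5.00 m, f = 2000 px -> ~2.50 m, f = 4000 px -> ~1.25 m
```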

So… self-driving cars are not a software or a hardware problem in isolation. In addition to having a fully tuned system, Tesla is going to apply new mathematics to the data they collect. Of course they won’t talk about it. They will just ship new improvements in ‘software’ under the hood, and when an OTA update arrives for a Model 3, your car will suddenly drive better. This has already been well documented.

LiDAR is a flawed approach and a myth. It’s not necessary, and it will become obsolete as 3D image-processing mathematics pushes what’s possible with collected data to the extremes of human vision. The gap between point-cloud information from LiDAR and what’s collected in the visible-light domain will be fully closed in the next 3 years. Smart EEs are solving this problem around the globe.

So how does all this affect when Tesla will get to L5 self-driving? It depends. Full autonomy is a moving target. People don’t need it by any specific date. It will be slowed by regulation. It will depend on when and where. Elon’s hyperbolic comments are well suited to that. It’s much more likely that isolated deployments in local jurisdictions will allow for small autonomy roll-outs, while a Tesla gets you tantalizingly close as you ride around town. Human override will be required during this transition phase.

The key to moving fast for carmakers is making complex trade-offs between backward compatibility and future optionality, and Tesla is the only one that has already demonstrated it can do that masterfully. Tesla is amassing massive amounts of learning from training on real-world data in shadow mode today, at a scale that makes simulation data look obviously weak in comparison. When there is no steering wheel, do you want to ride in a car that was trained in a simulated environment, or one that learned in the real world?

Let’s be honest: it’s hard to tell whether Tesla will emerge the winner in this market. That’s a complex calculus, and the industry they play in today is a massively difficult one to succeed in. There are a few ways of looking at this. One is: how can they possibly succeed? But another is: how can anyone else? Others don’t have cars on the road and are relying on a future technology that may or may not see the light of day (solid-state LiDAR) – and will most certainly be obsolete by the time it does.

In all this the winner is clear: image-based processing and recognition. And this will become abundantly clear long before the race to full autonomy has ended. The growth of the car industry will continue to find peace in its own violence. After all, innovations across the full stack of technology create new beginnings as well as disruptive ends. It’s fitting that the ‘race to peace’, enabled by the image processing the smartphone wars afforded us, will indeed make a safer world, regardless of exactly when and how it all arrives.
