Since lidar provides direct distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only. Lidar-equipped cars are going to replace, at the very least, the ones that don't make use of this obvious answer to obstacle detection challenges.
Based on that list, it seems it boils down to two things:
- cost (no longer a problem)
- too much code needed, and it bloats the data pipelines. Does anyone have any actual evidence of this being the case? Yes, code would be needed, but why is that innately a bad thing? "Bloated data pipelines" feels like another hand-wave; I think if you do it right it's fine, as Waymo has proven.
Really curious whether any Tesla engineers feel this is still the best way forward, or if it's just a matter of having to listen to Musk.
I’ve always felt that relying on vision only would be a detriment, because even humans with good vision get into circumstances where they get hurt because of temporary vision hindrances. Think heavy snow, heavy rain, heavy fog, even just when you crest a hill at a certain time of day and the sun flashes you.
Just for the record though, Musk isn't blindly anti-LIDAR. He has said (and I think this is an objective fact) that all existing roads and driving are based on vision (which is what all humans do). So that should technically be sufficient. SpaceX uses LIDAR for their docking systems.
I would argue that yes, we do use vision but we get that "lidar depth" from our stereo vision. And that used to be why I thought cameras weren't enough.
But then look at all the work with gaussian splatting (where you can take multiple 2d samples and build a 3d world out of it). So you could probably get 80% there with just that.
The ethos of many Musk companies (you'll hear this from many engineers that work there) is simplify, simplify, simplify. If something isn't needed, take it out. Question everything that might be needed.
To me, LIDAR is just one of those things in that general pattern of "if it isn't absolutely needed, take it out" – and the fact that FSD works so well without it proves that it isn't required. It's probably a nice to have, but maybe not required.
Humans aren't using only fixed vision for driving. This is such a tiresome thing to see repeated in every discussion about self driving.
You're listening to the road and car sounds around you. You're feeling vibration on the road. You're feeling feedback on the steering wheel. You're using a combination of monocular and binocular depth perception - plus, your eyes are not fixed focal length "cameras". You're moving your head to change the perspective you see the road at. Your inner ear is telling you about your acceleration and orientation.
And also, even with the suite of sensors that humans have, their vision perception is frequently inadequate and leads to crashes. If vision was good enough, "SMIDSY" wouldn't be such an infamous acronym in vehicle injury cases.
The issue is clearly attention, not vision, when it comes to humans. If we could actually process 100% of the visual information in our field of view, accidents would probably go down a shitload.
Humans have both issues. There are many human failures which are distinctly a vision issue and not attention related, e.g. misestimation of depth/speed, obscured or obstructed vision, optical focus issues, insufficient contrast or exposure, etc.
But how many of those crashes not caused by inattention could have been avoided with less idiocy and more defensive driving? I mean, yes, we can’t see as well in fog, but that’s why you should slow down.
Again, I'm still not saying that humans don't make bad decisions. I'm saying that, unequivocally, they also get into accidents while paying attention and being careful, as a result of misinterpretation or failure of their senses. These accidents are also common, for example:
* someone parking carefully, misjudges depth perception, bumps an object
* person driving at night, their eyes failed to perceive a poorly lit feature of the road/markings/obstacles
* person driving and suddenly blinded by bright object (the sun, bright lights at night)
* person pulling out in traffic who misinterprets their depth perception and therefore misjudges the speed of approaching traffic
* people can only focus their eyes at one distance at a time, and it takes time to refocus at a different distance. It is neither unsafe nor unexpected for humans to check their instruments while driving, but it can take the human eye hundreds of milliseconds to focus under normal circumstances. If you look down, focus, look back up, and focus again as quickly as you can at highway speeds, you will have travelled quite a long distance.
These types of failures can happen not as a result of poor decision making, but of poor perception.
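The refocus arithmetic in that last bullet is easy to sketch. The timings below are ballpark assumptions (accommodation time varies a lot by age and lighting), not measurements:

```python
# Rough arithmetic for the "check your instruments at highway speed" case.
# All timings are ballpark assumptions, not measurements.
highway_speed_ms = 30.0   # ~108 km/h (~67 mph), in meters per second
refocus_s = 0.3           # one accommodation shift: hundreds of milliseconds
glance_s = 0.5            # actually reading the instrument

total_s = 2 * refocus_s + glance_s      # look down + focus, read, look up + focus
distance_m = highway_speed_ms * total_s
print(f"{distance_m:.0f} m traveled effectively blind")  # 33 m
```

Even with charitable numbers, that's several car lengths covered on memory alone.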
In theory, a computer should be able to do the same. It could do sensor fusion with even more sense modalities than we have. It could have an array of cameras and potentially out-do our stereo vision, or perhaps even use some lightfield magic to (virtually) analyze the same scene with multiple optical paths.
However, there is also a lot of interaction between our perceptual system and cognition. Just for depth perception, we're doing a lot of temporal analysis. We track moving objects and infer distance from assumptions about scale and object permanence. We don't just repeatedly make depth maps from 2D imagery.
The brute-force approach is something like training visual language models (VLMs). E.g. you could train on lots of movies and be able to predict "what happens next" in the imaging world.
But, compared to LLMs, there is a bigger gap between the model and the application domain with VLMs. It may seem like LLMs are being applied to lots of domains, but most are just tiny variations on the same task of "writing what comes next", which is exactly what they were trained on. Unfortunately, driving is not "painting what comes next" in the same way as all these LLM writing hacks. There is still a big gap between that predictive layer, planning, and executing. Our giant corpus of movies does not really provide the ready-made training data to go after those bigger problems.
Putting your point another way, in order to replicate an average human driver’s competence you would need to make several strong advancements in the state of the art in computer vision _and_ digital optics.
In India (among others), honking is essential to reducing crashes.
We often greatly underestimate / undervalue the role of our ears relative to vision. As my film director friend says, 80% of the impact in a movie is in the sound.
Sufficient to build something close to human performance. But self driving cars will be held to a much higher standard by society. A standard only achievable by having sensors like LiDAR.
If a self-driving car had the exact vision of humans it would still be better, because it has better reaction times. Never mind the fact that humans can't actually process all the visual information in our field of view, because we don't have the broad attention to do that. It's very obvious that you can get superhuman performance with just cameras.
Whether that's worth completely throwing away LiDAR is a different question, but your argument is just obviously false.
This reminds me of the time I was distantly following a Waymo car at speed on 101 in Mountain View during rush hour. The Waymo brake lights came on first followed a second or two later by the rest of the traffic.
Even if they weren’t going to be held to a higher standard for widespread acceptance, tens of thousands of people a year in the US die due to humans driving badly. Why would we not try to do better than that?
Sufficient if all else were equal. But the human brain and artificial neural networks are clearly not equal. This is setting aside the whole question of whether we hope to equal human performance or exceed it.
To do gaussian splatting anywhere near real time, you need good depth data to initialize the gaussian positions. This can of course come from monocular depth, but then you are back to monocular depth vs lidar.
Mentioning gaussian splatting for why we don't need lidar depth is a great example of Musk-esque technobabble; surface level seemingly correct, but nonsense to any practitioner. Because one of the biggest problems of all SfM techniques is that the results are scale ambiguous, so they do not in fact recover that crucial real-world depth measurement you get from lidar.
Now you might say "use a depth model to estimate metric depth". But spend five minutes thinking about why a magic math box that pretends to recover real depth from a single 2D image is a very, very sketchy proposition when you need it to be correct for emergency braking rather than for some TikTok bokeh filter, and you will see that also doesn't get you far.
This is not really true if you have multiple cameras with a known baseline, or well-known motion characteristics like you get with an accelerometer plus wheel speed.
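To be concrete about the known-baseline case: a calibrated stereo pair recovers metric depth by triangulation, Z = f·B/d. The rig numbers below are invented for illustration:

```python
# Metric depth from stereo disparity: Z = f * B / d
#   f: focal length in pixels, B: baseline in meters, d: disparity in pixels.
# The rig values used below are illustrative, not from any real camera setup.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, 30 cm baseline.
# An object matched with 15 px of disparity sits at 1000 * 0.3 / 15 = 20 m.
print(stereo_depth(1000.0, 0.3, 15.0))  # 20.0
```

The scale ambiguity of monocular SfM disappears once B is known; the practical limits are matching accuracy and how quickly depth resolution degrades with range (error grows roughly with Z²).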
> and the fact that FSD works so well without it proves that it isn't required
The reports that Tesla submits on Austin Robotaxis include several of them hitting fixed objects. This is the same behavior that has been reported for prior versions of their software, of Teslas not seeing objects, including the incident for which they had a $250M verdict against them reaffirmed this past week. That this is occurring in an extensively mapped environment and with a safety driver on board leads me to the opposite of the conclusion you have reached.
My understanding is that there's more data processing required with cameras because you need to estimate distance from stereoscopic vision. And as it happens, the required chips for that have shot up in price because of the AI boom.
But I think costs were just part of the reason why Elon decided against Lidar. Apparently, they interfere with each other once the market saturates and you have many such cars on the same streets at the same time. Haven't heard yet how the Lidar proponents are planning to address that.
Lidar critics like to pretend that anti-collision is not a well-studied branch of Computer Science and telecoms. Wifi, Ethernet and cellphones all work well simultaneously, despite participants all sharing the same physical medium.
The points linked repeatedly focus on cost and complexity as justification, even explicitly stating Musk's desire to minimise components in Karpathy's list.
They don’t focus on safety or effectiveness except to say that vision should be ‘sufficient’. Which is damning with faint praise imho.
If that link was to try and argue that the removal of sensors makes perfect sense, I have to point out that anyone who reads it would likely have their negative viewpoint hardened. It was done to reduce cost (back when the sensors cost thousands of dollars) and out of a ridiculous desire by Musk for minimalism. It’s the same desire that removed the indicator stalk, I might add.
Instead of betting on RADAR and LIDAR hardware getting better and costs going down, they went with a vision-only approach. Everybody in this field knows the strengths and weaknesses of each system. Multi-modal sensor fusion is the way to go for L4 autonomy. There is no other way to reduce the risk. Vision only will never be able to achieve L4 in all weather conditions. Tesla may try to demonstrate L4 in limited geography and in good weather conditions, but it won't scale.
The reasoning is cynical but sound. If the system uses only the sensing modes people have, it will make the mistakes people do. If a jury thinks "well, I could have done that too!" you win. It doesn't matter if your system has fewer accidents if some of the failure modes are different from human ones, because the jury will think "how could it not figure that out?"
The reasoning was simply that LIDAR was (and incorrectly predicted to always be) significantly more expensive than cameras, and hypothetically that should be fine because, well, humans drive with only two eyes.
Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.
Sensor fusion is also hard to get right: since you still need cameras, you have to fuse the two information streams. That's mainly a software problem, and companies like Waymo have done it, but Tesla was having trouble with it earlier, and if you don't do it right, your self-driving system can be less reliable.
Sensor fusion seems like it'd be a big problem when you're handcoding lots of C++, and way less of a problem when all the sensors are just feeding into one big neural network, as Tesla and probably others are doing now. The training process takes care of it from there.
One of Udacity's first courses was on self-driving, taught by Sebastian Thrun who later cofounded Waymo. He went through some Bayesian math that takes a collection of lidar points, where each point contributes to a probabilistic assessment of what's really going on. It's fine if different points seem to contradict each other, because you're looking for the most likely scenario that could produce that combined sensor data. Transformers can do the same sort of thing, and even with different sensor types it's still the same sort of problem.
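A toy version of that "most likely scenario" idea: pick candidate hypotheses, let each lidar return contribute a Gaussian likelihood, and keep the hypothesis with the highest combined score. Everything here (hypotheses, ranges, noise figure) is invented for illustration, not from the actual course material:

```python
import math

# Each noisy return multiplies in a Gaussian likelihood per hypothesis.
# Contradictory points are fine: they just reweight the hypotheses.
def gaussian(x: float, mu: float, sigma: float) -> float:
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

hypotheses = {"obstacle_at_10m": 10.0, "obstacle_at_14m": 14.0}
returns = [9.8, 10.3, 10.1, 13.9]   # one contradictory point near 14 m
sigma = 0.5                          # assumed range noise (meters)

scores = {}
for name, mu in hypotheses.items():
    likelihood = 1.0
    for r in returns:
        likelihood *= gaussian(r, mu, sigma)
    scores[name] = likelihood

best = max(scores, key=scores.get)
print(best)  # obstacle_at_10m: three consistent points outweigh the outlier
```

Real systems do this over continuous state with proper priors (and in log-space to avoid underflow), but the shape of the computation is the same.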
I think this is the key. In theory, more information streams, when fused together (properly), should reduce error. If their stumbling block is the "properly" part, then the rest of those justifications come off as a pretty weak way to sidestep their own inability to deliver this properly.
We have lots of evidence of similar strategies being used in other domains, this seems like an especially life-critical domain that ought to have high rigor and standards applied.
> how incredible the human brain is compared to computers.
It is pretty incredible but people will (rightly so?) hold automated drivers to an ultra high standard. If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.
According to that article, Waymo crashes 2.3x more often than human drivers (every 98k miles vs 229k miles), which is clearly false. I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymo are safer than human drivers on average.
> According to that article, Waymo crashes 2.3x more often than human drivers (every 98k miles vs 229k miles), which is clearly false.
Why is it clearly false? It might be false, but clearly? I would definitely like to see evidence either way.
> I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymo are safer than human drivers on average.
That sounds like you are trying to find reasons to get the conclusion you want.
The NHTSA requires a report when any automated driving system hits any object at any speed, or if anything else hits the ADS vehicle resulting in damage that is reasonably expected to exceed $1,000.[1] In practice, this means that everyone reports any ADS collision, since trading paint between two vehicles can result in >$1k in damage total.
If you go to the NHTSA's page regarding their Standing General Order[2] and download the CSV of all ADS incidents[3], you can filter where the reporting entity is Waymo and find 520 rows. If you filter where the vehicle was stopped or parked, you'll find 318 crashes. If you scan through the narrative column, you'll see things like a Waymo yielding to pedestrians in a crosswalk and getting rear-ended, or waiting for a red light to change and getting rear-ended, or yielding to a pickup truck that then shifted into reverse and backed into the Waymo. In other words: the majority of Waymo collisions are due to human drivers.
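The filtering described is a few lines over the CSV. The snippet below runs on a synthetic stand-in for the SGO export; the column names ("Reporting Entity", "SV Pre-Crash Movement", "Narrative") are my guesses at the schema, and the rows are invented, not real incident data:

```python
# Synthetic stand-in rows mimicking the NHTSA SGO incident CSV.
# Column names are assumed; rows are invented, not real incidents.
rows = [
    {"Reporting Entity": "Waymo LLC", "SV Pre-Crash Movement": "Stopped",
     "Narrative": "SV stopped at red light, struck from behind"},
    {"Reporting Entity": "Waymo LLC", "SV Pre-Crash Movement": "Proceeding Straight",
     "Narrative": "Contact with road debris"},
    {"Reporting Entity": "Other ADS Co", "SV Pre-Crash Movement": "Stopped",
     "Narrative": "Rear-ended while parked"},
]

# Filter to one reporting entity, then to incidents while stopped/parked.
waymo = [r for r in rows if r["Reporting Entity"].startswith("Waymo")]
stopped = [r for r in waymo if r["SV Pre-Crash Movement"] in ("Stopped", "Parked")]
print(len(waymo), len(stopped))  # 2 1
```

With the real download you'd load the CSV with `csv.DictReader` (or pandas) and apply the same two filters, then scan the narrative column by hand as described.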
So either Waymos are ridiculously unlucky, or when these sorts of things happen between two human driven cars, it's rarely reported to insurance. In my experience, if there's only minor damage, both parties exchange contact info and don't involve the authorities. Maybe one compensates the other for damage, or maybe neither party cares enough about a minor dent or scrape to deal with it. I've done this when someone rear-ended me, and I know my parents have done it when they've had collisions.
If human driven vehicles really did average 229k miles between any collision of any kind, we'd see many more pristine older vehicles. But if you pay attention to other cars on the road or in parking lots, you'll see far more dents and scratches than would be expected from that statistic. And that's not even counting the damage that gets repaired!
Definitely. I looked at Tesla's source for these numbers; it looks like they primarily used data sourced from police reports, which most people only file if the incident is serious enough to go through insurance.
Tesla notes:
> These assumptions may contain limitations with respect to reporting criteria, unreported incident estimations (e.g., NHTSA estimates that 60% of property damage-only crashes and 32% of injury crashes are not reported to police
> Musk miscalculated on 1) cost reduction in LIDAR
Given that Musk has a history of driving lower costs, it's unlikely he overestimated the long-term cost floor. He just thought we were close to self-driving in 2014.
Another factor is Andrej Karpathy, who was the primary architect for the vision-only approach. Musk wanted fewer parts, and Karpathy believed he could deliver that. Karpathy is still an advocate of vision-only.
> Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
And, less excusably, ignorant of how incredible human eyes are compared to small-sensor cameras. In particular, high dynamic range in low light, with fast motion. Every photographer knows this.
There are good arguments but this isn’t one. Many humans (like me!) drive fine without binocular vision. And the cars have many cameras all around, with wide angle lenses that are watching everything all the time, when a human can only focus in one direction at a time.
I thought only the front view has binocular vision on the cars. The others are single, with no depth perception. How does it know how close objects are outside this forward cone?
Eh, I think ‘miscalculation’ might be giving too much credit about good intentions.
He wanted (needed?) to get on the self-driving hype train to pump up the stock price, knew that at the time there was zero chance they could sell it at the price point lidar required, or even other effective sensors (like radar), and sold it anyway at the price point people would buy at, even though it was never plausibly going to work at the level being promised.
There is a word for that. But I’m sure there are many lawyers that will say it was ‘mere fluffery’ or the like. And I’m sure he’ll get away with it, because more than enough people are complicit in the mess.
Miscalculation assumes there was a mistake somewhere, but as near as I can tell, it is playing out as any reasonable person expected it to, given what was known at the time.
I think Musk is really not as smart as he thinks he is and this specific thing was probably an earnest mistake. Lots of other fraudulent stuff going on though of course!
IMHO not using lidars sounds like a premature optimisation and a complication, with a level of hubris.
This is a difficult problem to solve and perhaps a pragmatic approach was/is to make your life as simple as possible to help get to a fully working solution, even if more expensive, then you can improve cost and optimise.
Considering he also runs a company that puts computer chips inside brains to augment them you’d think he ought to have a more sound understanding as to the limits of both.
If the data were positive for Tesla, Tesla would publish it
They do not, so one can infer it is not flattering
(Before you post the "Miles driven with FSD" chart, you should know upfront (as Tesla must) that chart doesn't normalize by age of vehicle or driving conditions and is therefore meaningless/presumably designed to deceive)
Until a lawyer points out other cars see that. My car already has various sensors, and in manual driving it sounds alarms if there is a danger I seem not to have noticed. (There are false alarms, but most of the time it flagged something I did notice and probably should have left more safety margin for, even though I wouldn't have hit it.)
Also, regulators gather statistics, and if cars with a given sensor do better, they will mandate it.
Very recent issue with Waymo https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-.... This is 17 years after they bet the farm on LIDAR, with no signs it's ever going to be cost-effective, or better than multiple cameras with millisecond reaction and 360-degree coverage that never get tired, drunk, or distracted, backed by other cheaper sensors and NNs trained on billions of miles of real-world data.
A feature that is bulletproof in other cars with a very boring and industry standard sensor (it's not even expensive), while Tesla insisted they could do it with just normal cameras.
Ok, so Waymo is useless in the rain then; kind of limiting. But at least that 0.000000000001% of the time when it actually is a sinkhole, you won't damage the bumper.
Autopilot isn’t full self driving (FSD); most cars these days ship with smart cruise control (which is what Autopilot basically is). Do you have fatality statistics for FSD?
If we are just talking about smart cruise control, most cars are using cameras and radar, not lidar yet. But Tesla is special since it doesn’t even use radar for its smart cruise control implementation, so that could make it less safe than other new cars with smart cruise control, but Autopilot was never competing with Waymo.
Dude, that's not a 'puddle' as the article claims, that's a body of water where it's not even visually obvious whether it's safe to drive through. Maybe I'm a bad driver, but I'd hesitate to drive through that in a small car too.
If you drive the road every day, you probably do. If you can see someone drive through it (perhaps someone who knows the area well and knows how deep it is based on puddle width), you definitely do.
It is sound to think that cameras plus an accelerometer, plus data about the car and environment (that you get from your ears), ought to be able to mimic and improve on human driving. However, humans' general-purpose spatial awareness and ability to integrate all kinds of general information is probably really hard to replicate. A human would realize that an orange fluid spilling across the road might be slippery, or guess the way a person might travel from the way their eyes are pointing...
It may just be faster to make lidar cheap. And lidar can do things humans can't.
IIUC, the cameras in a Tesla have worse vision (resolution) at far distances than a human. So while in the abstract your argument sounds fine, it'll crumble in court when a lawyer points out a similar driver would've needed corrective lenses.
This is a new and flawed rationale that I haven't heard before. Tesla cameras are worse (lower resolution, sensitivity, and dynamic range) than human eyes and don't have "ears" (microphones).
Most accidents happen because people are human, aren't paying attention, are inebriated, not experienced enough drivers, or reckless.
It's not fair to say that vision-based models will "make the same mistakes people do", as >99% of the mistakes people make would be avoidable if those issues were addressed. And a computer can easily address all of those issues.
As I understand, lidars don't work well in rain/snow/fog. So in the real world, where you have limited resources (research and production investment, people talent, AI training time and dataset breadth, power consumption) that you could redistribute between two systems (vision and lidar), but one of the systems would contradict the other in dangerous driving conditions — it's smarter to just max out vision and ignore lidar altogether.
When it's not safe to drive, it's not safe to drive.
I've been in zero-road-speed whiteout conditions several times. The only move to make is to the side of the road without getting stuck, and turning on your flashers.
Low-light cameras would not have worked. Sonar would not have worked. Infrared would not have worked.
I think the weather where cameras/sensors start having problems is much better than zero-vis whiteout.
If we could make sensors that lets an autonomous vehicle drive reliably in any snow/rain where a human could drive (although carefully) then we're good. But we are a long way from that. Especially since a lot of sensor tech like cameras tend to fail in 2 ways, both through their performance being worse in adverse condition but also simply failing to function at all if they are covered in ice/snow/water.
If you have multi-return lidar, you can see through certain occlusions. If the fog/rain isn't that bad, you can filter for the last return and get the hard surface behind the occlusion. The bigger problem with rain is that you get specular reflection and your laser light just flies off into space instead of coming back to you. Lidar not work good on shiny.
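The last-return trick is simple to sketch. Here each beam carries a list of range returns ordered nearest-first; keeping the last one per beam discards the near hits on fog droplets. The beam IDs and ranges are invented:

```python
# Toy multi-return filter: a beam may report several returns, e.g. a first
# hit on fog droplets and a last hit on the hard surface behind them.
# Ranges (meters) and beam IDs are invented for illustration.
beams = {
    0: [3.2, 41.7],   # fog at ~3 m, wall at ~42 m
    1: [2.9, 40.9],
    2: [40.2],        # clean beam, single return
}

# Keep the last (farthest) return per beam to recover the solid surface.
hard_surface = {beam: returns[-1] for beam, returns in beams.items()}
print(hard_surface)  # {0: 41.7, 1: 40.9, 2: 40.2}
```

Real filters also look at return intensity and pulse width to decide which return is the solid target, but last-return is the basic idea.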
No, it isn't "smarter." Camera-only driving is the product of a stubborn dogmatic boss who can't admit a fundamental error. "Just make it work" is a terrible approach to engineering.
Criticism of Musk isn't hate of Musk. The point is completely valid and the results of this management style infuses all of his businesses albeit with differing results.
It's significant that a truly hard problem like autonomous driving doesn't respond to a "brute force" management style. Rockets aren't in this category because the required knowledge and theory is fairly complete, whereas real autonomous driving is completely novel.
Hmm. Is it ragebaiting to respond to a tired and wrong statement by saying that it's tired and wrong and that the situation is merely the product of piss poor management decisions? People get understandably frustrated seeing the same wrong talking point that people with domain knowledge in computer vision and robotics have repeatedly explained is wrong in extremely fundamental ways.
> I don't own a Tesla.
n.b. The shoe/foot comment was not about you. It was about Musk. It wouldn't make any idiomatic sense for the expression to be about you given what you said and what you were responding to. If they'd said "pot, meet kettle", then it would have been about you. In that context, saying that you don't own a Tesla feels like a weird thing for you to insert in your comment. It potentially comes across as suspiciously defensive.
Why does this matter? You have to slow down in rain/snow/fog anyway, so only having cameras available doesn't hurt you all that much. But then in clear weather lidar can only help.
If your vision is good enough to drive in rain/snow/fog, you don't need lidar in clear conditions. If you planned to spend $10B on vision and $10B on lidar — you would be better off spending $20B on better vision.
It still infuriates me that Tesla went so long being able to call their feature “auto pilot.“ Then they had the audacity to call it user error when people thought the car would automatically pilot itself.
> If yo[u can] drive in rain/snow/fog, you don't need lidar in clear conditions
Of course you do, you're driving at much higher speeds and so is the surrounding traffic. You can't just guess what you might be looking at, you have to make clear decisions promptly. Lidar is excellent in that case.
People who don't understand that sensor fusion is an entire field of study with tons of existing work and lots of expertise have been fooled by a fake argument of "If the camera and lidar disagree, what do you do?"
It's frustrating to still see it repeated over a decade later. It was always bullshit. It was always a lie.
Do cameras work well in those conditions? Nope. Also, cameras don't work well with certain angles of glare, so as a consumer I'd rather have something over-engineered for my safety to cover all edge cases...
Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.
That way you pick up things that are missed by one or more sensor types, catches problems and errors from any of them, and end up with the most accurate ‘view’ of the world - even better than a normal human would.
It’s what Waymo is doing, and they also unsurprisingly, have the best self driving right now.
This is silly. Cameras are cheap. Have both. Sensors that sense differently in different conditions is not an exotic new problem. The kalman filter has existed for about a billion years and machine learning filters do an even better job.
1) it's not cheap to produce lidars at a stable, predictable quality by the millions;
2) car driving training data sets for lidars are much scarcer (and will always be much scarcer due to cameras' higher prevalence) and at a much lower quality;
3) combined camera+lidar data sets are even scarcer.
1. Automotive LiDAR is down to $350 in China already. BYD is starting to put LiDAR in even entry level cars. (It's been in their mid and high end cars for a while).
2+3. BYD collects extensive training data from customers, much like Tesla does. They will have no trouble with training.
> Since lidar has distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only
Human eyes do not have distance information, either, but derive it well enough from spatial (by ‘comparing’ inputs from 2 eyes) or temporal parallax (by ‘comparing’ inputs from one eye at different points in time) to drive cars.
One can also argue that detecting absolute distance isn’t necessary to drive a car. Time-to-contact may be more useful. Even only detecting “change in bearing” can be sufficient to avoid collision (https://eoceanic.com/sailing/tips/27/179/how_to_tell_if_you_...)
Having said that, LiDAR works better than vision in mild fog, and if it’s possible to add a decent absolute distance sensor for little extra cost, why wouldn’t you?
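The constant-bearing rule from the sailing link is easy to sketch: if the other vehicle's bearing from you barely changes while the range shrinks, you are on a collision course. The 2D positions and velocities below are made up:

```python
import math

# Constant-bearing, decreasing-range collision check.
# Positions (m) and velocities (m/s) in a flat 2D frame; all values invented.
def bearing(dx: float, dy: float) -> float:
    return math.atan2(dy, dx)

us, us_v = (0.0, 0.0), (10.0, 0.0)           # heading east at 10 m/s
them, them_v = (100.0, 100.0), (0.0, -10.0)  # heading south at 10 m/s

def relative_state(t: float):
    dx = (them[0] + them_v[0] * t) - (us[0] + us_v[0] * t)
    dy = (them[1] + them_v[1] * t) - (us[1] + us_v[1] * t)
    return bearing(dx, dy), math.hypot(dx, dy)

b0, r0 = relative_state(0.0)
b1, r1 = relative_state(2.0)
on_collision_course = abs(b1 - b0) < 0.01 and r1 < r0
print(on_collision_course)  # True: bearing holds steady while range drops
```

Note no absolute depth is needed; only the angular track of the target and the sign of the range change, which is part of why vision-only proponents lean on cues like this.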
Human/animal vision uses way more than parallax to judge distances and bearings - it uses a world model that evolved over millions of years to model the environment. That's why we can get excellent 3D images from a 2D screen, and also why our depth perception can be easily tricked with objects of unexpected size. Put a human or animal in an abstract environment with no shadows and no familiar objects, and you'll see that depth perception based solely on parallax is actually very bad.
Human eyes are much better than cameras at dealing with dynamic range. They’re also attached to a super-computer which has been continuously trained for many years to determine distances and classify objects.
I don’t like the comparison between cars and humans. Humans don’t travel around at 100mph in packs of other humans. Why not use every sensor type at our disposal if it gives us more info to make decisions? Yes, I understand it’s more complicated, but we figure stuff out.
It's not that simple. Cameras don't report 3D depth, but these AI models can and do pick up on pictorial depth cues. LiDAR is incredibly valuable for collecting training and validation data, but may also make only an insignificant difference in production inference.
Yea, even if they could match human-level stereo depth perception with AI, why would they say "no" to superhuman lidar capabilities? Cost could be a somewhat acceptable answer if there weren't problems with the camera-only approach, but there are still examples of silly failures of it.
And if I remember correctly, they also removed another superhuman sensor, the radar, in their newer models: the one which in certain conditions was capable of sensing multiple cars ahead by bouncing the signal underneath other cars.
I'm not an expert on ML vision, but I do have a Tesla and it seems to be able to tell how far away things are just fine. I'm not sure what would be wrong with the vision system that lidar needs to fix.
The phantom braking issue with auto pilot tells me it can’t. A shadow from a tree doesn’t trigger your brakes locking up at 70+ mph when there’s a lidar sensor to tell you it’s not a physical object.
“Just buy FSD” isn’t a reasonable answer to a problem literally no other automaker suffers from.
Stopped using autopilot because of the phantom braking.
It's also recently gotten much worse at lane departure sensing, often confused by snow or slightly faded road markers. Not pleasant to have the alarms go off while calmly and safely driving.
Luckily everyone else in the comments is an expert. And also doesn't recognize that Teslas already drive themselves and did not need lidar. They also mischaracterize the reasoning.
> I'm not sure what would be wrong with the vision system that lidar needs to fix.
This conversational disconnect is as old as the hills:
1. Person 1 asks "what's wrong" (if it ain't broke don't fix it)
2. Person 2 wants to make something better
My meta-goal here on HN (and many places where people converse) is for people to step back and recognize the conversational context and not fall into the predictable patterns that prevent us from making sense of the world as best as we can.
Yeah, it's BS. Tesla uses lidar where it makes sense: they have a small lidar fleet to collect ground-truth depth data for better vision estimation. That part has long been solved.
I have a suspicion here on HN. When criticizing big tech, especially Google and FB, at a certain time of the day a specific cohort comes online and downvotes. Suspiciously, that is the time when people in the US start working or come online. Either fanboys, employees, or an organized group of users trying to silence big-tech criticism.
I have no proof, of course, and it might be coincidence, or just a difference of mindset between US and European users. It has happened a few times already, and to me it looks sus.
But if they actually read rather than just ctrl+F <company name>, then not writing the company name but hinting at it in an obvious way doesn't help either.
It's been my experience that hn and reddit have a very high overlap in audience these days. The jerrybreakseverything crowd. Anything anti-tesla, anti-grok, is applauded.
Considering cameras can create reliable-enough distance measurements AND handle all the color perception needed to drive roads legally, it was always a ridiculous idea by a certain set of people that lidar is necessary.
No, cameras cannot create reliable distance measurements in real-world conditions. Parallax is not a great way to measure distance for fast, unpredictably moving objects (such as cars on the road). And dirt or misalignment can significantly reduce accuracy compared to lab conditions.
Note that humans do not rely strictly on our eyes as cameras to measure distances. There is a huge amount of inference about the world, based on our internal world models, that goes into vision. For example, if you put us in a false-perspective or otherwise highly artificial environment, our visual acuity goes down significantly; conversely, people with a single eye (so no parallax-based measurement ability) still have quite decent depth perception compared to what you'd naively expect. Not to mention, our eyes are kept very clean and maintain their alignment to a very high degree of precision.
I don't think they meant that cameras alone can literally create reliable distance measures. At the risk of putting words in their mouth, I would guess they meant "cameras as the only input to a distance model", with the model doing all the heavy lifting and covering the points that you quite rightly point out are needed.
Several companies, most notably Tesla, have done this well enough to drive in all manner of traffic. I'm not going to comment about if lidar is strictly needed or not to achieve better-than-human safety, that's yet to be proven one way or another by anyone. The point is that cameras + local inference can do a pretty good job at distance estimation
Stereo cameras are useless against repeating patterns. They easily match neighboring copies. And there are lots of repeating or repeating-like patterns that computers aren't smart enough to handle.
You can solve this by adding an emitter next to the camera that does something useful, be it beaconing lights, noise patterns, or phase-synced laser pulses. And those "active cameras" are what everyone calls LIDARs.
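The repeating-pattern failure is easy to demonstrate: on a periodic texture, a sum-of-absolute-differences matcher finds several disparities that match equally well. A toy 1-D sketch with synthetic data (not any real stereo matcher):

```python
pattern = [0, 1, 2, 3, 3, 2, 1, 0]      # one period of a repeating texture
left = pattern * 8                      # 64-pixel scanline
shift = 3                               # true disparity
right = left[-shift:] + left[:-shift]   # right image shifted by 3 px

patch = left[32:40]                     # 8-px window from the left image
# SAD matching cost for candidate disparities 0..24
costs = [sum(abs(r - p) for r, p in zip(right[32 + d:40 + d], patch))
         for d in range(25)]
zero_cost = [d for d, c in enumerate(costs) if c == 0]
print(zero_cost)  # [3, 11, 19] -- the true disparity and two perfect impostors
```

With the texture repeating every 8 pixels, disparities 3, 11, and 19 all produce zero matching cost, so the matcher has no way to pick the right one from appearance alone.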
There is plenty of evidence that cameras alone are not safe enough, and even Tesla has realized that removing radar to save cost was a mistake.
> ridiculous idea by a certain set of people that lidar is necessary.
"Necessary"? Seems like a straw man, don't you think? I strive to argue against the strongest reasonable claim someone is making.
Lots of reasonable people suggest LIDAR is helpful to fill in gaps when vision is compromised, degraded, or less capable.
People running businesses, of course, will make economic trade-offs. That's fine. But don't confuse, say, Elon's economic tradeoff with the full explanation of reality which must include an awareness that different sensors have different strengths in different contexts.
So, when one thinks about what sensor mix is best for a given application, one would be wise to ask (and answer) such questions as:
- What is the quality bar?
- What sensors are available?
- How well do various combinations of sensors work across the range of conditions that matter for the quality bar?
- WRT the quality bar: who gets to decide what matters? The company making the cars? The people who drive them? Regulators that care about public safety? The answer: it is a complex combination.
It is time to dismiss any claim (or implication) that "technology good, regulation bad". That might be the dumbest excuse for a philosophy I've ever heard. It is the modern-day analogue of "Brawndo's got what plants crave." Smart people won't make this argument outright, but unfortunately, their claims sometimes reduce to this level of absurdity. Neither innovation nor regulation is inherently good or bad. There are deeper principles in play.
Yes, some individuals would use their self-proclaimed freedom to e.g. drive without seatbelts at 100 mph at night with headlights off. An extreme example, but it is the logical extension of pure individualism run amok. Regulators and anyone who cares about public safety will draw a line somewhere and say "No. Individual stupidity has a limit." Even those same people would eventually come to their senses after they kill someone, but by then it is too late.
WTF was their calculus on the break-even liability point? The "if we do this, we save X amount of money, but stand to lose Y in lawsuits for crashes that LIDAR could have prevented."
I find it comical that people continue to go back to this rage well against "a certain company" for their vision-only approach when the truth is they have the best automatic driving system an individual can buy, rivaling Waymo and beating the Chinese brands.
Why are the commenters not pissed at the dozens of other car companies who have done absolutely nothing in this space? Answer: because it's not nearly as fun to be pissed at Kia or Mercedes or whoever. Clearly they are just enjoying the shared anger, regardless of whether it is justified.
Because other car companies don't have CEOs who've been super confident about predicting actual full self driving either "this year" or "next year" for the past decade. If Ford had been swearing up and down they'd have full self driving cracked any day now for ten years, and been charging people for the hardware along the way, everyone would be pissed at them too.
Surely you already know this, so why pretend otherwise?
1. Tesla is not competitive with Waymo, they're not even in the same class. Waymo is 10 years ahead at least. I understand you can't buy a Waymo, but still.
2. Other car companies are properly valued, Tesla is overinflated.
3. Other cars, even basic Hondas, have the same level of self driving as Teslas.
4. Other car companies don't lie to their customers about their capabilities or what they're buying.
> Other cars, even basic Hondas, have the same level of self driving as Teslas.
This is not true at all. Don't confuse lane assist with self driving. And yes I'm aware people are upset by the "Autopilot" product name they chose for lane assist.
There is certainly some truth that "some company" overpromised and underdelivered. They advertise "full self driving" but then hide in the fine print that "oh jk, not really, but it's still full self driving if anyone asks ;) ;) ;)"
I think the frustration stems from the obvious falsehoods in the advertising, and the doubling-down on the tech, despite the well-documented weaknesses of the implementation.
Please be courteous to other drivers on the road, we all share it. Just make sure you’re the one in charge, not the software.
This isn’t to put your argument down, but to offer the perspective of people involved in accidents. Loss of life is bad, but surviving an accident with serious injuries can be nearly as bad.
Certain company has 300k subscribers that rely on that ridiculous service.
My father lost vision in one eye and 50% in the other something like 20 years ago. He struggles with parking but is otherwise doing OK without lidar. Turns out motion-based vision is more accurate beyond 10-20 meters than stereoscopic vision.
I wouldn’t take too much issue with the “cameras are enough” claim if cameras actually performed like eyes. Human eyes have high dynamic range and continuous autofocus performance that no camera can match. They also have lids with eyelashes that can dynamically block light and assist with aperture adjustment.
The appeal to human biology and argument against fusion between disparate sensors kinda falls flat when you’re building a world model by fusing feeds from cameras all around the car. Humans don’t have 8 eyes in a 360 array around their head. What they do have is two eyes (super cameras) on ~180 degree swiveling and ~180 degree tilting gimbal. With mics attached that help sense other vehicles via road noise. And equilibrioception, vibration detection, and more all in the same system, all fused. If someone were actually building this system to drive the car, the argument based on “how did you drive here today?” gets a lot stronger. One time I had some water blocking my ear and I drove myself to the hospital to get it fixed. That was a shockingly scary drive — your hearing is doing a lot of sensing while driving that you don’t value until it’s gone.
One camera can't really produce depth/distance information, but two cameras sure can. The eyes in your head don't capture distance information individually, but with two eyes you can infer distance.
I'll preface by saying lidar should be used with autonomous vehicles.
Individual cameras don't have distance information, but you can easily calibrate a system of cameras to give you distance information. Your eyes do this already, albeit not quantitatively. The quantitative part comes from math our brains aren't setup to do in real time.
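The quantitative part the comment mentions is standard stereo triangulation: depth Z = f * B / d for focal length f (pixels), baseline B, and disparity d (pixels). A sketch with illustrative numbers (not any particular camera rig) also shows why stereo degrades with range: a fixed one-pixel matching error costs little depth accuracy up close and a lot at distance.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    # Classic pinhole-stereo relation: Z = f * B / d
    return focal_px * baseline_m / disparity_px

f, B = 1000.0, 0.5                      # hypothetical: 1000 px focal length, 0.5 m baseline
near = stereo_depth(f, B, 50.0)         # 50 px disparity -> 10 m away
far = stereo_depth(f, B, 5.0)           # 5 px disparity  -> 100 m away

# A 1 px matching error barely moves the near estimate...
print(near - stereo_depth(f, B, 51.0))  # ~0.2 m
# ...but shifts the far estimate by several car lengths
print(far - stereo_depth(f, B, 6.0))    # ~16.7 m
```

Since depth goes as 1/d, the error for a fixed disparity uncertainty grows roughly quadratically with range, which is one reason stereo alone struggles with fast, distant objects.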
Why make things more complicated than they need to be? Humans don't have lidar and we are the only intelligence that can reliably drive. Lidar just seems like feature engineering, which has proven to be a dead end in most other AI applications (bitter lesson).
Then you deeply underestimate how difficult the problem is, and deeply misunderstand where all the effort has been spent in developing autonomous vehicles.
If all the effort has been spent in trying to replicate the human brain then I am comfortable saying that is a mistake.
We have a tool that can tell with great accuracy how far away an object is. The suggestion that we should ignore it and rely on cameras that have to guess it because “that’s how humans work” is absurd, frankly.
If you had to choose between picking someone up at the airport or dragging a slice of pizza twice your size down the NYC subway stairs, what would YOU do?
The bitter lesson I think is a great way of explaining the logic behind Tesla's strategy. People aren't getting it.
Whether or not it'll actually work remains to be seen, but it's a perfectly reasonable strategy. One counterargument would be that the bitter lesson can be applied to LIDAR too; you don't have to use that data for feature engineering just because it seems well suited for it.
Humans can drive with eyes only, but we are better drivers when we can also use other senses like hearing. If humans had lidar, we would use it when driving.
This knee-jerk reply is old and tired, and the counterarguments are well-trod at this point. Even if cameras-only can build a car that's as good as humans, why should we settle for "as good as" humans, who cause 40,000 fatalities a year in the US? If we can do better than humans with more advanced sensors, we are practically morally obligated to do so.
Yes! The smart and nuanced panoply of replies to the GP are a wonderful counterbalance to people "just saying things that pop into their head" -- which is unfortunately how I view a lot of human speech nowadays :/
I worked in mobile robotics for a defense contractor in the early-mid 2000s and we had a homegrown Lidar (though we always called it ladar) that was large, heavy, and cost $250k to make. I remember a year out of college driving 2 hours to a military base with it in the backseat of my car and being paranoid at every bump I hit. These things seem like a dream.
Before y'all say that now everyone will be able to get Waymo's sensor suite for hundreds of dollars instead of tens of thousands, that's the easy part.
Waymo benefits from Google's unparalleled geospatial data. Waymo also has a support architecture that doesn't depend on real time remote operation, which can't be implemented reliably in almost all cases. You can't be following your supposedly unsupervised cars with a supervisor in a chase car. You can't even be driving remotely. Your driver software has to be able to drive independently in all cases, even those where it needs to ask a human how to proceed.
The difference between level two and level three driver assist and level four autonomy is like the difference between suborbital flight and putting a payload in orbit. What looks like a next logical step actually takes 10X or more effort, scale, and testing.
I’m not disagreeing with what you’re saying, but does Alphabet actually intend Waymo to be a trillion dollar retail car business itself, selling cars to everyone? Or would they be happy to sell all those super cool things to OEMs? In a world where “everyone” can make a car affordable that can run Waymo’s software, they may be happy to license all that to “everyone” and simply collect fat royalty checks, à la Microsoft in the 90s, allowing them to make a ton of incremental money without all the capex of making their own cars.
They'll probably operate some services and also license their tech to carmakers to sell to consumers. I'm sure there'll be a subscription involved for that too.
They are not the same. I don't think Tesla or its consumers are interested in geofenced self driving, they want to be able to use it on road trips and driving around suburbs.
That's true. Waymo has true Level 4 automation, while Tesla customers delude themselves about the capabilities of their Level 2 system and endanger others for clout videos.
I mean, Tesla gave up on quality self-driving many years ago when Elon went hard against LIDAR. He's never relented, either, and I don't foresee that changing.
That is one plausible outcome. Waymo is experimenting with partnerships with ride hailing apps on the one hand, and building their software into Toyotas on the other hand. So far they have built a few thousand vehicles in a factory run by Magna, which specializes in low volume vehicles. Hyundai wants to sell Waymo tens of thousands of vehicles. That's going to look different in fundamental ways.
It would be smarter to take that approach. Google's core competency is technology, technical infrastructure, and research. More mundane things like manufacturing and customer service are... shall we say, less of a core competency. Take the high value add, leave other things to automakers to duke it out. Also good for avoiding attracting even more regulatory attention.
Good question, and for many it will not be, and rentals are acceptable.
But also for many, renting a car has a huge ICK factor. It is one thing while traveling to rent from an agency who has (purportedly) thoroughly cleaned and inspected the car before you get it. It would be quite another to rent cars like scooters, where the previous user likely smoked, left wrappers and food debris, and who knows what else, even damage. Plus, most people who own cars keep a fair amount of stuff in the car for their specific convenience, and have their own settings, etc.
The fact that the likes of Zipcar, Turo, and the lot have not entirely taken over urban transport but instead remain niche players shows the extent of this preference.
For suburban and rural markets, it just gets more extreme. How quickly could a rental service deliver a car? Could it reliably do so in less than 5-10 minutes for people to run an errand? If not, unless they are insanely cheap, people will likely want to own their own. Perhaps it'll be more of a hybrid: households owning one car and renting a spare for specific trips?
If you use them regularly, renting is both a pain in the ass and quite costly. If you have atypical security (or even normal, in many cases) or usage patterns, it’s even worse.
A lot of folks are relearning lessons on this front in Cloud right now.
Not a good analogy: a server is not a personal space occupied by humans. It's for the same reason people don't want to hot-desk; they prefer a personal space with their own stuff in it.
With the price declines in EVs, we are talking about 1 million EVs, even with all the Waymo tech, for $50 billion soon. The approximate annual revenue of a private-hire car is $50k+, i.e., $50-60 billion a year for a million cars. But the total taxi-driver population in the US is only 350-400k. I think people are underestimating how soon electric tech plus AI/automation will hit.
> but does Alphabet actually intend Waymo to be a trillion dollar retail car business itself, selling cars to everyone?
Google doesn't do retail other than Chromecast and Pixel phones, and that is already annoying to them as it is because it involves something Google is notoriously bad at - actual customer support.
Starting up a car brand is orders of magnitude worse.
For one, people actually need to trust your brand to survive for at least five to ten years - cars are an investment, and a car that I can't trust to get safety-relevant spare parts (brake rotors, brake pads, axle bearings) all of a sudden is essentially an oversized paperweight. For a company such as Google, this alone (remember Killed By Google) is a huge obstacle to overcome.
Then, you need production. Sure, you can go to Magna or other contract manufacturers, or have an established large brand build vehicles for you, or you go the Tesla route and build everything from scratch. Either way has associated pros and cons.
And then you need a nationwide network of spare parts, dealerships, repair shops and technicians that can fix the issues people will inevitably run into, because the wide masses abuse cars in ways you might not even dare think about while testing, or because other people run into your cars and then your cars need repairs.
Even being a derivative of an established car brand can be a royal PITA. Let's take Mercedes Benz as an example with the 2003-2009 Mercedes-Benz SLR McLaren. On paper, it's a Mercedes vehicle, with a lot of the parts actually originating from stock Mercedes cars - but most dealerships will refuse to work on it. Either because they lack the support to even properly jack the car up, or because they lack the specialized tools for the AMG engine, or because they cannot even order the parts as Mercedes gates repairs for that thing to special shops. Or, again Mercedes, with Maybach luxury cars. The situation isn't as bad as with the McLaren, but their cars are challenging in another way - the S 650 Pullman weighs around 3 metric tons empty and is 6.50 meters long. Good luck finding a jack even capable of lifting that beast, most Mercedes sports-car shops don't carry jacks that are normally used to lift Mercedes Vito transporters!
Even Tesla, and they've been at it for the better part of two decades, still struggles with that. Their shitty spare parts logistics actually drive up not just insurance prices for their own customers, but for everyone - hit a Tesla with your Dodge and be at fault, and now your insurance has to pay out for months of a rental car because Tesla can't be arsed to provide the body shop the Tesla ends up at with spare parts in any reasonable time.
Established car brands however have all of that ironed out for many, many decades now. American, Asian, European, doesn't matter. And the spare parts don't even have to be made for cars: ask your local Volkswagen dealer to order a few pieces of "199 398 500 A" and one piece of "199 398 500 B" and you'll probably have a lead time of less than a day, at least in Germany - for the uninitiated: that part number belongs to the famous sausage, the second one to the accompanying curry ketchup, with more sausages being sold each year than actual cars.
And established car brands also bring something to the table: their own experiences with integrating smart technology. Yes, particularly German carmakers are notoriously bad in that regard, but for example Mercedes Benz was the first car brand in the world to get a certified Level 3 system on the road [1] and are now working on a Level 4 certification [2]. That kind of experience in navigating bureaucracy, integration and testing cannot be paid for in money.
tl;dr: I see no way in which Waymo goes to general availability regarding selling cars. They will run their own autonomous car fleets in select markets where they can fully control everything, but seeing Waymo tech generally available will be as part of established car brands.
> For one, people actually need to trust your brand to survive for at least five to ten years - cars are an investment, and a car that I can't trust to get safety-relevant spare parts (brake rotors, brake pads, axle bearings) all of a sudden is essentially an oversized paperweight.
Those bits should be easy, unless the OEM was tragically stupid. Where you'll get into trouble is when you need replacement computer bits; those are often tricky for mainstream brands, but if your niche brand ECUs all fail around the same time (wouldn't be the first time for a Google product), and the OEM isn't around to make new ones or make it right, off to the junkyard with all of them. If it's just normal failure rates, you can probably scavenge from totaled vehicles at junkyards even after new parts become unobtainium.
OEM style lighting will also probably get hard to find. Ideally a niche maker would lean towards standard parts there, but that's not the fashion of the times.
> Those bits should be easy, unless the OEM was tragically stupid.
Well... just look at Tesla. A lot of their parts don't come from the classic supplier-OEM delivery chain model, but Tesla makes as much as they can on their own. It saves them a bunch of money, both when it comes to the profit margin of the supplier, and being at the whims of their supplier, but it is nasty for the customers when there simply is no parts OEM that one could go to when the vehicle manufacturer goes out of business or refuses to support the car any further.
> Where you'll get into trouble is when you need replacement computer bits
Oh hell yes. New EU law is particularly to blame here. OBD diagnosis was always nasty enough; you virtually always need to buy expensive diagnosis software and hardware (e.g. Mercedes XENTRY, VW ODIS, BMW ICOM)... but the newest requirements enforce live digital signatures and anti-tamper checks. Nasty as hell. And the buses themselves... it's no longer just one CAN bus doing everything, not since the Kia Boys; it's multiple buses of different speeds, some using encryption on the wire, all making diagnosis, troubleshooting and repairs much more difficult than they used to be.
And that is before getting into the replacement parts issue itself that you wrote up.
> Starting up a car brand is orders of magnitude worse.
Tesla did it, and is more valuable than most other car brands added together. They had a novel product: a good EV that was fun to drive. Is that a unique situation? Could a truly autonomous car launch do it?
Your arguments make sense in themselves, but maybe underestimate the revolutionary value that a level 4 car would provide.
> Tesla did it, and is more valuable than most other car brands added together.
Half of Tesla's value is hopium, and the rest is pure trust that the current government will continue propping Elon up (even if he personally ran afoul of the Dear Leader). A lot of the promises Elon made, particularly when it comes to FSD, had to be walked back, and I don't see them ever coming to fruition - at least not for the cars that don't have LIDAR hardware.
>Waymo benefits from Google's unparalleled geospatial data.
How much of Waymo's training data is based on LIDAR mapping versus satellite/aerial/street view imagery? Before Waymo deploys in a new city, it deploys a huge fleet of cars that spend months of driving completely supervised, presumably to construct a detailed LIDAR map of the city. The fact that this needs to happen suggests Google's geospatial data moat is not as wide as it seems.
If LIDAR becomes cheap, you could imagine other car manufacturers adding it to cars, initially and ostensibly to help with L2 driver aids, but with the ulterior motive of making a continuously updated map of the roads. If LIDAR were cheap enough that it could be added to every new Toyota or Ford as an afterthought, it would generate a hell of a lot more continuous mapping data than Waymo will ever have.
> Before Waymo deploys in a new city, it deploys a huge fleet of cars that spend months of driving completely supervised, presumably to construct a detailed LIDAR map of the city.
Not entirely true. From their recent "road trips" last year, the trend is they just deploy less than 10 cars in a city for a few weeks (3-4 weeks from what I recall) for mapping and validating. Then they come back after a few months to setup infrastructure for ride hailing (depot, charging, maintenance, etc.) and start service.
> difference between suborbital flight and putting a payload in orbit. What looks like a next logical step actually takes 10X or more effort, scale, and testing.
But the difference between suborbital flight and putting a payload in orbit is smaller than you might think.
The delta-v is not that significantly different. Scale is almost the same; a little more power and (with a second stage) your payload is hurtling around the Earth instead of falling like a ballistic missile, which is what the suborbital predecessors were.
Suborbital ballistic "travel" beyond continental distances is almost as expensive as orbital. If you can make it to the antipode, you're basically almost orbital.
Suborbital "trips" straight up, beyond the atmosphere, are very cheap.
>Waymo benefits from Google's unparalleled geospatial data.
That's true, and they have a huge headstart, but I wonder if all these cubesat companies can bring the price down on data enough that others will be able to compete.
Maybe. But Google has been there in a sensor laden car, overhead with an airplane, and buying all the access that is available in satellite imagery, and fusing that together in a continually updated model. Plus real time data from a billion maps and navigation users. I pity the fool going up against that.
I don't think Waymo is using Street View / satellite data to drive. They have to build an HD map using a special LIDAR-equipped vehicle before deploying in a new area.
Maybe their navigation system will be better than the competition due to real-time traffic data from Google Maps users, but I don't think it'll be so much better as to be an unbeatable advantage.
They use LIDAR maps in service areas, but might be using Street View data for training? (I imagine it would be really, really difficult to build a useful simulator with just SV imagery, but probably also quite valuable to have the variety of environments.)
Yeah, imagine having, say, two of these LIDAR sensors, each pointed towards the car's blind spots. Comma already does well with the car's built-in radar + vision on straight freeway runs, but can't reliably change lanes on its own. The built in blind spot detectors on most cars are a binary "there is/not a car present", which doesn't reliably determine if it's safe to actually do a lane change.
I was using cruise control on the highway yesterday and thinking: this is like very cheap very crude self-driving. And you know what? In its limited UNIX-like way, it's great: the car does a much better job of gradually injecting fuel than I, with my brick-like human foot, can do. Robot 1, human 0.
And from there it's easy to think: couldn't the car also detect white lines and stay within them? It doesn't have to be perfect; it can be cruise control++. If it errs a little, I can save it. But otherwise, this is a function I'd love to use if it was available, for a sub $1000 price point.
I think of Tesla Autopilot as sophisticated cruise control. It can perform most driving tasks better than I can and saves a lot of cognitive work, but it still needs close to 100% of my attention.
The intention of my comment (possibly unclear) was to say: I know we can do self-driving very well very expensively. But what can we do extremely cheaply?
Like the difference between "what can we do with an LLM on my maxxed-out laptop with an RTX 5090 card" vs. "what can we do with a Mac mini." The self-driving-car version of that question.
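The "cruise control++" idea is essentially a lane-offset feedback loop. A deliberately minimal sketch, where the pixel positions, gain, and limit are all hypothetical illustration values, not from any real system:

```python
def steering_correction(lane_center_px, image_center_px, kp=0.005, limit=0.3):
    """Proportional steering toward the detected lane center.

    Output is a normalized steering command clamped to a gentle limit,
    so the human driver can always overpower it: cruise control++,
    not self-driving.
    """
    error = lane_center_px - image_center_px
    return max(-limit, min(limit, kp * error))

print(steering_correction(340, 320))   # lane center 20 px off -> 0.1 correction
print(steering_correction(320, 320))   # centered -> 0.0, hands stay relaxed
```

The clamp is the "if it errs a little, I can save it" property: the assist can nudge, never yank.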
This $200 MicroVision lidar is a short range lidar that produces a really fuzzy point cloud. At best, it can be used for parking. It's unlikely to help self driving cars much at all, much less "reshuffle auto sensor economics".
The mind salivates at the idea of sub-$100 and soon after sub-$10 Lidar. We could build spatial awareness into damn near everything. It'll be a cambrian explosion of autonomous robots.
The next headline will be that it also damages human retinas.
It's not safe just because it's infrared. And the claims that it's safe because of the short exposure time are highly questionable; would you accept that argument for any other laser?
There are complaints that some Volvo cars have damaged iPhone cameras. It's not even clear if Apple covers those under warranty. We've seen car-review YouTubers get their iPhone camera sensors damaged on video (captured by a second camera) while reviewing.
One highlight from the video: he says most cameras are fine, it's just iPhones that don't have a very good IR filter. Which sounds correct; in my experience most cameras have pretty substantial IR filters that have to be removed if you want to photograph IR.
I also wonder if the smaller sensor size on phones contributes, since the energy is being focused onto a smaller spot.
Either way, for that to happen he was filming the LIDAR while active, for a decent amount of time, from right next to the car. I assume under normal conditions it wouldn't be running constantly while the vehicle is stationary?
Is it possible that the iPhone filters are weaker due to FaceID requirements? I seem to recall that FaceID (and similar systems, like Windows Hello) depend on IR to get a more 3D map of the face, so it'd make sense that they want to be more sensitive in that range.
Laptops aren't generally being used in the same areas as cars though, so you wouldn't expect to see as many cases involving Windows Hello compatible laptops/cameras.
Are the eyes really "no better" in this scenario? From the above article it seems we tuned the behavior to the eye specifically (but not necessarily image sensors):
> Moving to a longer wavelength that does not penetrate the human eye allows new lidars to fire more powerful pulses and stretch their range beyond 200 meters, far enough for stopping faster cars. Now a claim of lidar damage to the charge-coupled-device (CCD) sensor on a photographer's electronic camera has raised concern that new eye-safe long-wavelength lidars might endanger electronic eyes.
> Producers of laser light shows are well aware that laser beams can damage electronic eyes. “Camera sensors are, in general, more susceptible to damage than the human eye,” warns the International Laser Display Association
"doesn't penetrate the human eye" seems a bit hand wavy, but I take it to mean "these length pulses in this wavelength are tuned to have the power not be enough to damage the eye". Camera lenses may not have the same level of IR filtering/gathering area or, if they do, there is nothing implying the image sensor has the exact same tolerances as the inside of the eye. From the same:
> Sensor vulnerability to infrared damage would depend on the design of the infrared filters
A heater usually damages the eyes through drying out/heating up the outside layer with constant high intensity, not by causing damage to the retina (post filtering). https://hps.org/publicinformation/ate/q12691/
> Furthermore, since the eye blocks the IRR, the eye begins to overheat leading to eye damage and possible blindness. Because of this, you should not look at the heater for an extended period of time.
Enough intensity at any wavelength will damage any camera or eye, of course, but the safety scenario here seems to be built around the eye. Similarly, I've heard of Waymos causing 6 mph accidents but no reports of eye damage from any car LiDAR. Despite that, in the above YouTube clip Marques Brownlee clearly shows his camera being damaged as it's moved around.
> The biggest concern is not photographic cameras but rather the video cameras mounted on autonomous cars to gather crucial information the cars need to drive themselves.
So they don't care if that breaks my phone camera? Wtf?
Is there any deeper study on long term effects regarding retinal damage?
I would imagine, even with safe dosages, there would be some form of cumulative effect in terms of retinal phototoxicity.
More so if we consider the scenario that this becomes a standard COTS feature in cars and we are walking around a city centre with a fleet of hundreds of thousands of these laser sources.
Some lidar units simply use the wavelength that the human eye is opaque to.
The grandparent comment is about camera lenses with little to no near infrared cutoff filter. Some older iPhones were like that and that was the original breaking story.
Absolutely, and it is a major cause of cataracts. Close to 100% of people with lenses in their eyes will eventually get cataracts if they are ever exposed to unfiltered sunlight.
I remember those old cellphones with weak IR filters. It was a scandal because light clothing turns out to be more transparent to IR than to visible light so they were acting as a sort of clothing "X-Ray" in bright light. Creepers on the Internet tried to start a whole new genre of porn but were shut down in a hurry by cellphone manufacturers adding robust IR filters on the next generation of smartphones.
Shame that perverts had to ruin that for us; it was kinda neat to point a TV remote at the camera and see the bulb light up.
The short-range stuff is already $150-300 per unit. If you're thinking indoor robots that's already technically feasible. Over 25% of all Chinese cars being produced today have LiDAR.
Even mid-range sensors used in ADAS systems only cost $600-750. The long-range stuff that's needed for trucking or robotaxis is $1,500–6,000
There are already very good sub-$100 lidars, especially for 2D, since they were made en masse for vacuum cleaners, e.g. the LD19 (or STL-19P, as they're calling it now for some reason). You need to pair them with serious compute to run AMCL, plus actuation (though ST3215s are cheap and easy to integrate now too) and control for that actuation, which also wants its own compute, plus a battery, etc., so the costs quickly add up. Robotics is expensive regardless of how cheap components get.
True, you have to go up to $120 for the 25m version, or $450 for Unitree's L2 which gets 30m in 3D. That's about as much you could possibly ever need unless you're making high speed vehicles that need more reaction time. In which case you probably shouldn't be relying on the cheapest thing on the market :)
Even back when Snowden was current news, we'd reached the point where laser microphones could cover every window in London for a bill of materials* less than the annual budget of London's police force.
* I have no way to estimate installation costs, but smartphones show that manufacturing at this scale doesn't need to increase total cost 10x more than the B.o.M.
People saying LIDARs can't recognize colors or LIDARs can't take pictures don't know what they are talking about.
They're just fancy cameras with synced flashes. Not Star Trek material-informational converting transporters. Sometimes they rotate, sometimes not. Often monochrome, but that's where Bayer color filters come in. There's nothing fundamentally privacy preserving or anything about LIDARs.
I don't know what I'm talking about, but isn't the wavelength of the laser pretty limiting to the idea of just slapping a Bayer color filter on? Like, if the laser is IR (partly so they're not visually disrupting all the humans around them), doesn't the signal you get back lack the visible-spectrum information you'd need to get RGB right?
I’d definitely feel much better if most cameras in the world were replaced by LIDAR. I feel like it would be much tougher to have a flawless facial recognition program with LIDAR alone
Gait recognition is almost entirely hype. Sure it works to tell the difference between n = 10 people but so what, you can tell the difference between a group of 10 people by what kind of shoes they are wearing.
Then you combine it with some other technique, eg tracking daily routes of individuals, to lower the error rate. You only need a handful of bits to distinguish all inhabitants of the average city. But imho that error rate would likely be low enough for some judge to authorize more invasive surveillance of suspects thus identified.
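The "handful of bits" point can be made concrete with a back-of-envelope sketch in Python (the population figure and per-signal bit counts are illustrative, not measurements):

```python
import math

def bits_to_identify(population: int) -> float:
    """Bits of identifying information needed to single out one
    person among `population` candidates (information-theoretic
    minimum, assuming independent signals)."""
    return math.log2(population)

# A city of ~1M people needs only ~20 bits. If gait narrows you to
# 1-in-10 (~3.3 bits), shoes another 1-in-10, and a daily route
# 1-in-1000 (~10 bits), the combination is already close to unique.
print(bits_to_identify(1_000_000))  # ~19.9 bits
```

This is why individually weak signals (gait, clothing, routes) combine into a strong identifier: the bits add up fast.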
The minute internet became widespread it was game over.
Pros and cons. :/
It'll never happen, but we need a bill of rights for privacy. The laypeople aren't well-versed or pained enough to ask for this, and big interest donors oppose it.
Maybe the EU and states like California will pioneer something here, though?
Edit: in general, I'm far more excited by cheap lidar tech than I am afraid of the downsides. We just need to be vigilant.
Lidar doesn’t really give you much to “see”, just shape and distance…so I’m a bit confused how it can be used for invasive surveillance, do you mean when fused with vision input it somehow allows it to infer more privacy stuff?
Medical, banking and insurance are three industries that the European data privacy watchdogs are much more strict about because of the potential for damage.
I'd say the numbers listed here prove the GP's point about poor enforcement. The largest fine is roughly 0.97% of Meta's 2023 revenue, the equivalent of a $600 fine for somebody making $60k/year. It's a tiny cost of doing business at best, definitely not a deterrent, given Meta's blatant disregard for GDPR since then.
> the equivalent of a $600 fine for somebody making 60k / year
I don't know about you, but on that income I would certainly not brush off such a fine as a "cost of doing business". Would it cause me financial trouble, or would it force me to sacrifice other expenses? Absolutely not. But would I feel frustrated at having to pay it, feel stupid for my mistake, and do my best to avoid it in the future? Absolutely yes.
My bad, a better analogy would be a dealer making $60k/year selling drugs who gets caught by police and is fined $600. I wouldn't expect them to change much.
1% of Meta's global revenue is a tiny-tiny cost of doing business? At that point, I think I can stop even trying to argue here. It's a massive fine any way you put it. Especially when you consider the ceiling hasn't been reached and non compliance is more and more costly by design.
Their net profit was $60billion in 2024. This is peanuts. It can fluctuate by multiples of this fine in a month, depending on whether or not they've had a bad or good month, nevermind year. This pretty much is just a cost of doing business.
The interesting part is that it keeps going up. You seem to believe we have somehow reached a cap where Meta can just expense it as a cost of doing business. That's not how European law works. The fine maximum is far higher and repeated non compliance keeps making the fines higher and higher. It's a ladder not a sizing precedent.
Unfortunately it doesn't in practice. Meta's total revenue since 2018 when GDPR came into force is just shy of $1T. Even with all the smaller fines combined, the total amount of GDPR related fines is in the range of $3B. It's a rounding error.
There isn't a trend of increasing fines, nor has any fine even reached the cap, let alone applied multiple times for the recurring violations. Even more with the current US administration's foreign policy towards the EU.
While GDPR as a law is fine, with the exception of enforcement limitations, enforcement so far has been a complete joke.
Maximum GDPR fine is 4% of global revenue in the previous year. If a company has a 30% profit margin then they can, in theory, treat it as a cost of doing business, indefinitely.
It's 4% per fine. Each violation is a fine and Meta owns multiple companies that can be fined. But 4% of global revenue already can't be treated as just a cost of doing business. Their shareholders would murder them.
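To sketch the "cost of doing business" arithmetic being argued over (figures are illustrative, not Meta's actuals):

```python
# Toy model: maximum single GDPR fine (4% of revenue) against an
# assumed 30% profit margin. Units are arbitrary.
revenue = 100.0
profit_margin = 0.30
max_fine = 0.04 * revenue

profit_before = revenue * profit_margin   # ~30.0
profit_after = profit_before - max_fine   # ~26.0

# A single maximum fine erases ~13% of annual profit - survivable,
# but repeated fines (per violation, per subsidiary) compound.
print(1 - profit_after / profit_before)   # ~0.133
```

Whether ~13% of profit per violation counts as "a cost of doing business" is exactly the disagreement in this thread; the math itself is neutral.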
Humanity has never known a world without surveillance. Responsibility cannot exist without being watched. Primitive tribes lived under the constant eye of the group, and agricultural eras relied on the strict oversight of the clan. Modern states simply adopted new tools for an ancient necessity. A society without monitoring is a society without accountability, which only leads to the Hobbesian trap of endless conflict.
Mass surveillance is a relatively recent development. Dense urban civilizations are not. And yet their denizens have not historically devolved into a “nasty, brutish, and short” existence. In fact, cities have been centers of culture and learning throughout history. How does this square with your theory?
The 19th century was the true cradle of mass surveillance. Civil registration, property tracking, and institutionalized police forces provided the systemic oversight required to manage dense urban life. These administrative tools served as the analogue version of digital monitoring to ensure every citizen remained known and categorized. Cities thrived as centers of culture only because these new forms of visibility prevented the Hobbesian collapse that anonymity would have otherwise triggered.
And what about all of the previous ~40-50 centuries where cities were centers of learning and art and not Hobbesian hell holes? Ur is slightly older than the 19th century, I believe.
And note that there is evidence for cities of tens of thousands of inhabitants from 3000 BCE, while Rome reached 1,000,000 residents by 1 CE. Again, without becoming some Hobbesian nightmare.
None of those things are remotely comparable to the surveillance we're talking about. There's a world of difference between, "My city knows who owns what properties and also we have a police force", and "Western intelligence agencies scoop up every bit of data they can grab about anyone on the planet and store it forever"
In my country it wasn't until the late 19th century that someone had the balls to stop going to church on Sunday. It was a huge scandal at the time but it all worked out in the end.
Humans have always done mass surveillance on each other. You don't need technology for that.
At no point in time before this era was it possible for a random bureaucrat to have a reasonably comprehensive list of everyone in a country who attended church yesterday.
This is a reduction to absurdity. Those old societies you cite didn't actively surveil with the goal of micromanaging people's daily lives the way that modern ones do.
Rural surveillance was far more suffocating because every single action was subject to the community gaze. This is exactly why classic literature frames the journey to the city as a liberation from the crushing weight of the village eye. The idea of the peaceful countryside is a modern utopian fantasy that ignores how ancient clans dictated every aspect of life including marriage and death. Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management. Ancient society did not just monitor people; it owned their entire existence through inescapable social visibility.
"It was always shit everywhere" is revisionist history born out of the fantasy of statists looking to justify the modern (administrative) enforcement state.
While the lack of anonymity in small towns certainly puts a damper on one's ability to deviate too far from social norms, the list of things that could get you subjected to government violence without creating a victimized party was infinitely shorter. Things that get state (or state-deputized) enforcers on your case today were, 150+ years ago, matters of "yeah, that's distasteful, he'll have to settle that with God," or something that would come back to bite you when something happened, because society did not have the surplus to pay nearly as many people to go around looking for deviance that could be leveraged to extract money. These people had far more practical day-to-day freedom to run and better their lives than we do now, even if constrained by the fact that they had substantially less wealth to leverage to that effect.
> Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management
And they almost exclusively deal in things that historical societies didn't even bother to regulate.
You're beyond delusional if you think running afoul of an HOA is worse than running afoul of the local, state, or federal government. Yeah, they can screech and send you scary letters with scary numbers, but they don't get the buddy treatment from courts that "real" governments do (to the great injustice of their victims), and their procedural avenues for screwing their victims on multiple axes are way more limited.
Seriously, go get in a pissing match with a municipality over just where the line for "requires permit" is and get back to me. Unless you want to do something that is more than petty cosmetic stuff and unambiguously in violation of the rules a HOA is a paper tiger for the most part (not to say that they don't suck).
Your reaction actually proves the point. Aggression thrives in anonymous spaces because the lack of oversight removes the weight of accountability. When people feel unobserved, they quickly abandon the social friction that once held tribes and clans together. You are essentially providing a live demonstration of why a society without any form of monitoring inevitably slides into the Hobbesian trap.
I don't think a random internet comment proves anything about society at large.
People don't hesitate to be aggressive even when they're not anonymous and there's a threat of accountability - see, all crime, or people just acting shitty toward others.
Mass surveillance does not cause everyone to magically get along.
History shows that whenever surveillance gaps appear, chaos follows. The explosion of crime during early urbanization was the specific catalyst for the creation of modern police forces because traditional social bonds had failed to provide oversight in growing cities. Japan maintains its safety through a deep-rooted culture of mutual neighborhood monitoring that leaves little room for anonymity. Even China successfully quelled the violent crime waves of its early economic boom by implementing a sophisticated surveillance network.
Neither police forces nor "neighborhood monitoring" is equivalent to mass surveillance, though.
Anyway I'm curious why - despite having less anonymity than at any point in history, at least from the perspective of law enforcement - we still see high crime rates, from fraud to murders?
I've always wondered if Tesla's issues with FSD were a sensor problem or an intelligence problem. I think Tesla's claim is that when they look at accident footage, it is clear to a human how the car could have avoided the accident, and thus, if FSD was more intelligent, the accident could have been avoided. Is this reasoning wrong?
I personally find it convincing that the problem with self-driving is mostly that the models aren't intelligent enough, and that adding LiDAR wouldn't be enough to achieve the reliability required. But I don't know, I don't really work in that field so maybe engineers who have more experience with self driving might say otherwise.
It is easy to underestimate how much one relies on senses other than vision. You hear many kinds of noises that indicate road surface, traffic, etc. You feel road surface imperfections telegraphed through the steering wheel. You feel accelerations in your butt, and conclude loss of traction from response of the accelerator and motion of the vehicle. Secondly, the human eye has much more dynamic range than any camera. And is mounted on an exquisite PTZ platform. Then turning to the model -- you are classifying obstacles and agents at a furious rate, and making predictions about the behavior of the agents. So, in part I agree that the models need work, but the models need to be fed, and IMHO computer vision is not a sufficient sensor feed.
Consider an exhaust condensation cloud coming from a vehicle's tail pipe -- it could be opaque to a camera/computer-vision system. Can you model your way out of that? Or is it also useful to do sensor fusion of vision data with radar data (cloud is transparent) and others like lidar, etc. A multi-modal sensor feed is going to simplify the model, which in the end translates into compute load.
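The sensor-fusion point can be sketched with a toy confidence-weighted average (real systems use far more sophisticated machinery, e.g. Kalman filters; all names and numbers here are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class RangeEstimate:
    distance_m: float   # estimated distance to the obstacle
    confidence: float   # 0..1, sensor's self-assessed reliability

def fuse(camera: RangeEstimate, radar: RangeEstimate) -> float:
    """Confidence-weighted fusion of two range estimates.
    When the camera is blinded (e.g. by an opaque exhaust cloud),
    its confidence collapses and the radar estimate dominates."""
    total = camera.confidence + radar.confidence
    if total == 0:
        raise ValueError("no usable sensor data")
    return (camera.distance_m * camera.confidence
            + radar.distance_m * radar.confidence) / total

# Camera sees the cloud surface at 12 m with low confidence;
# radar sees through it to the actual vehicle at 30 m.
print(fuse(RangeEstimate(12.0, 0.1), RangeEstimate(30.0, 0.9)))  # ~28.2
```

The point is that a second modality doesn't just add data; it lets the system down-weight a sensor precisely when that sensor's failure mode is active.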
> I've always wondered if Tesla's issues with FSD were a sensor problem or an intelligence problem
Even if it’s an intelligence problem, it’s possible that machine intelligence will not get to the point where it can resolve anytime soon, whereas more sensors might circumvent the issue completely. It’s like with Musk’s big claim (that humans use camera only to drive); the question is not if a good enough brain will be able to drive vision-only, but if Tesla can make that brain.
maybe? But also LiDAR just gives a more complete picture of what is around the car. I think this is supported by how many miles waymo cars run unsupervised vs Tesla.
I am skeptical that Tesla has this solved but interested in seeing how it goes as they move to expand their robotaxi service.
Some problems are simply underdetermined: if the desired output varies wildly for identical inputs, you need more information. No algorithm will help you.
Sensors or intelligence, at the end of the day it’s an engineering problem which doesn’t require pure solutions. Sometimes sensors break and cameras get covered in mud.
The problem is maintaining an acceptable level of quality at the lowest possible price, and at some point you spend more money on clever algorithms and researchers than a lidar.
Tesla is adding radar and I predict before long it will add LiDAR because that's the only way to get to Level 3, which is a requirement for moving forward in California
They're not reading the actual FCC document then, which says:
Strategy: The move brings Tesla's sensor approach closer to competitors like Ford, GM, and Rivian, who utilize multi-modal systems (cameras plus radar) for their driver-assistance features.
Potential: This 'HD radar' could provide critical redundancy and data needed for achieving higher levels of driving automation and improving system performance in all conditions.
Low cost, sub $200 automotive grade LIDAR sensors are already available.
Cepton Technologies offers Nova [0], Nova-Ultra [1] sensors both at a sub-$100 price point [2]. These feature a 120°(H) x 90°(V) FOV at 50m, with 2.7M points per second sampling.
Velodyne introduced Velabit in 2021, for $100. Boasting 100m range and a 60-degree horizontal FoV x 10-degree vertical FoV.
The article claims that:
> What distinguishes current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs.
which is simply not true. Cepton (currently offering) and Velodyne (acquired by Ouster in 2023) have done this for years.
99% of LiDAR production comes from just 4 Chinese companies. Yes, low-range systems are already in the $150-300 range, but MicroVision is promising to produce this in Washington.
Basically they're saying "we can catch up to China by 2028/2029" ||so please subsidize us||
Cepton primarily operates B2B, as B2C demand for specialized LIDAR like this is pretty low. Anything you find on eBay is either a leftover dev kit or salvage. This is pretty much the case for MicroVision, Ouster etc.
Interestingly, there have been people in the LIDAR industry predicting costs like this for many years. I heard numbers like $250 per vehicle back in 2012 [1]
Of course, ambitious pricing like this is all about economies of scale - sensors that are used in production vehicles are ordered by the million, and that lowers the costs massively. When the huge orders didn't materialise, the economies of scale and low prices didn't materialise either.
Also: "Luminar Technologies, a prominent U.S. lidar manufacturer, filed for Chapter 11 bankruptcy in December 2025." LIDAR is useful in a small set of scenarios (calibration and validation), but do not bet the farm on it or make it the centrepiece of your sensor suite.
Also, MicroVision, the company in OP's article bought the IP from Luminar. This feels like a circular venture capital scam. Luminar originally went public via SPAC and made a bunch of people very wealthy before ultimately failing.
This is very wrong.
LIDAR scanners have revolutionized surveying by enabling rapid, high-precision 3D mapping of terrain and infrastructure, capturing millions of data points per second. LIDAR can penetrate dense vegetation, allowing accurate ground-level mapping in forested or obstructed areas. Drone-mounted LIDAR has become very popular. Tripod-mounted LIDAR scanners are very commonly used on construction sites. Handheld LIDAR scanners can map the inside of buildings with incredible accuracy. This is very commonly used to create digital twins of factories.
Lidar is critical for any autonomous vehicle. It turns out a very accurate 3D point cloud of the environment is very useful for self driving. Crazy, I know.
I work with programs approaching L3+ from L2, with the requirement that the system works for 99% of roads (not tesla before people start fixating on that).
We find that the cases where lidar really helps are in gathering training data, parking, and if focused enough some long distance precision.
None of these have been instrumental in a final product; personally I suspect that many of the cars including lidar use it for data collection and edge cases more than as part of the driving perception model.
Sort of; accidents are the absolute core of the product. They are rare, but they are the focus of the design.
By edge cases I mean scenarios like the lights going out in an underground garage, low visibility due to colourful smoke or dust, or things like optical illusions or occlusion where a human would just need to remember what was there.
Lidar can help, but not really enough to be worth it.
Waymo is the best current autonomous driving system and Waymo uses LIDAR. This is because LIDAR is an incredibly effective sensor for accurate range data. Vision and Radar range data is much less accurate and reliable.
Waymo uses LIDAR in the realtime control loop. It combines LiDAR, camera, and radar data in real time to build a 3D representation of the environment, which is constantly updated.
I fundamentally don't trust any level 4 system that doesn't use LIDAR
Yes, I am aware of waymo... What they do is impressive. However they don't have a product that works for all highways yet, that's the space I work in, and we have no real fixation on lidar... It's nice but not a requirement, and hard to justify the cost unless you can make sales because of it (and there are some places where this is the case, but not everywhere)
You don't need the mm precision of lidar very often; we find that it offers nothing at speed over radar; and in tight manoeuvres the cameras we need for human park assist and ultrasonics do well enough.
It is not more accurate; it is more precise, but that doesn't really matter. (Radar gives you relative speed directly, which is more important than a very precise point at highway speeds.)
Waymo is level 4. I think currently it is nearly impossible to make a level 4 system as safe as Waymo without Lidar. Maybe new 4d imaging radar or THz radar could change this. Sensor modalities have physics-based limitations, current camera+radar isn't sufficient for L4.
Like Waymo? (https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...) 17 years after betting the farm on LIDAR the solution fails to navigate a puddle. Sorry but they bet on the wrong technology, Tesla has overtaken them with multi camera and NN solution.
Your conclusion from a single incident is a bad inference. One vehicle getting confused by a puddle (likely a sensor fusion edge case or mapping artifact, not a fundamental LIDAR failure) doesn't indict the technology. Tesla's cameras have produced vastly more failures.
Waymo has driven tens of millions of autonomous miles with a serious injury/fatality rate dramatically lower than human drivers. The actual data shows the technology works. Tesla FSD still requires active driver supervision and is not legally or technically a robotaxi system. Comparing them as if they're at parity is wrong.
LIDAR gives direct metric depth with no inference required. Camera-only systems must infer depth from 2D images using neural networks, which introduces failure modes LIDAR doesn't have. Radar is very valuable when LIDAR and cameras give ambiguous data.
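The "direct metric depth" claim is just time-of-flight arithmetic; a minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Time-of-flight range: the lidar pulse travels out and back,
    so distance is half the round-trip path. No neural network,
    no learned depth prior - just a clock and a constant."""
    return C * round_trip_s / 2

# A pulse returning after ~667 ns corresponds to ~100 m.
print(tof_distance(667e-9))  # ~99.98 m
```

A camera-only system must instead infer this same number from pixel patterns, which is where the extra failure modes come from.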
In what metrics has Tesla overtaken Waymo? Deployed robotaxi revenue miles? No. Disengagement rates? No published comparable data. Safety per mile in driverless operation? No.
A Tesla wouldn't stop for a puddle. Also, it's not locked to a small geofenced area (people have driven coast to coast on FSD without a single intervention, parking spot to parking spot). When I can buy a Waymo vehicle that does this, then Waymo will have caught up with Tesla.
Your puddle example is utterly irrelevant. Teslas are notorious for phantom braking. Robotaxis are very much locked to tiny geofenced areas. Some even shaped like a penis because Musk is such a child.
"people have driven coast to coast without a single intervention on FSD including parking spot to parking spot"
I find this claim very dubious. Prove it. Teslas never drive empty for a very good reason.
Err, they have lots of Model Ys in Austin as Robotaxis right now with no drivers. I guess this is also "dubious". Look, it's clear you have a huge bias; I would urge you to read up on https://grokipedia.com/page/List_of_fallacies, otherwise your emotional responses will blind you to reality.
"hey have lots of Model Ys in Austin as Robotaxis right now with no drivers"
They do not. They have a very small number of them open to a select group of people, not the general public. And they are limited to even smaller areas. You need to understand that Musk is NOT an engineer; he is more of a con man desperate to inflate Tesla's stock price. If he says self-driving cars don't need LIDAR, then they must actually need it.
Who should I believe: a random poster on Hacker News who likely has an average salary, or Elon Musk, the richest man in the world, who created multiple trillion-dollar companies... hard one!
What's wrong with Grokipedia? It's a bit less woke/far-left-wing, more balanced.
You are just mindlessly regurgitating the lies of Musk and using an "appeal to wealth" to justify not analyzing them. He has been lying about self driving since 2016.
For nearly a decade Elon Musk has claimed Teslas can truly drive themselves. They can’t. Now California regulators, a Miami jury and a new class action suit are calling him on it.
Around a decade ago the nascent LIDAR industry boomed and dozens of startups emerged out of nowhere all racing to make cheap automotive grade LIDAR, and here we are.
Of course, MicroVision is only claiming their LIDAR to be suitable for advanced driver assist, but ADAS encompasses a wide array of capabilities: basically everything between cruise control and robotaxis, so there's no definition of how much LIDAR you need to do the job, just however much you feel like. Tesla feels like none at all.
Interesting to see the cost curve drop ... this always changes the market.
I have been watching the sensor space for a while. Cheap LIDAR units could open up weird DIY uses and not just cars. ALSO regulatory and mapping integration will matter. I tried to work with public datasets and it's messy. The hardware is only one part! BUT it's exciting to see multiple vendors in the space. Competition might push vendors to refine the software stack as well as the hardware. HOWEVER I'm keeping an eye on how these systems handle edge cases in bad weather. I don't think we have seen enough data yet...
> Cheap LIDAR units could open up weird DIY uses and not just cars.
Interestingly, there are already some comparatively cheap LIDAR units on the market.
In the automotive market, ideally you need a 200m+ range (or whatever the stopping distance of your vehicle is) and you need to operate in bright direct sunlight (good luck making an eye-safe laser that doesn't get washed out by the sun) and you need more than one scanning plane (for when the car goes over bumps).
On the other hand, for indoor robotics where a 10m range is enough and there's much less direct sunlight? Your local robotics stockist probably already has something <$400
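The range requirement above can be sanity-checked with the standard stopping-distance formula (friction coefficient and reaction time below are assumed typical values, not measurements):

```python
def stopping_distance(speed_mps: float, mu: float = 0.7,
                      reaction_s: float = 1.0, g: float = 9.81) -> float:
    """Rough stopping distance: reaction-time travel plus braking
    distance v^2 / (2*mu*g) for a tire-road friction coefficient mu."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * mu * g)

# Highway speed, 130 km/h ~= 36.1 m/s -> roughly 130 m, which fits
# inside a 200 m sensor range; a 50 m indoor-class lidar does not
# leave enough margin for highway use.
print(stopping_distance(130 / 3.6))
```

This is why automotive long-range units chase 200 m+ while vacuum-cleaner lidars get away with 10 m.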
Neato from San Diego developed a $30 (indoor, parallax-based) LIDAR about 20 years ago for their vacuum cleaners [1].
Later, improved units based on the same principle became ubiquitous in Chinese robot vacuums [2]. Such LIDARs, and similarly looking more conventional time-of-flight units are sold for anywhere between $20-$200, depending on the details of the design.
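A quick sketch of how such parallax-based units compute range (all numbers are illustrative, not from any specific product):

```python
def triangulation_range(baseline_m: float, focal_px: float,
                        disparity_px: float) -> float:
    """Parallax ('triangulation') lidar: a laser dot viewed by a
    sensor offset by `baseline_m` shifts on the image plane by a
    disparity inversely proportional to distance: d = f * b / disp."""
    return focal_px * baseline_m / disparity_px

# With a 5 cm baseline and a 700 px focal length, a 10 px shift
# puts the dot at ~3.5 m. Accuracy degrades with range as the
# disparity shrinks toward sub-pixel values - which is why this
# design works indoors but not for automotive distances.
print(triangulation_range(0.05, 700, 10))  # ~3.5
```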
Sounds like the quality isn't all that great but LD06 sensors look like they're about $20 and someone who works on libraries about this suggested the STL27L which seems to be about $160 and here's an outdoor scan from it: https://sketchfab.com/3d-models/pidar-scan-240901-0647-7997b...
Not sure if the ld06 is a scanner like this or if it's just a line (like you'd use for a cheaper robot vac).
I wonder if Comma.ai will ever be open to incorporating this into openpilot.
I always thought the argument that humans are adequate drivers and hence only cameras was not great. Why not actually be better than humans at sensing and driving?
I'm not well versed in RF physics. I had the feeling that light-wave coherency in lasers had to be created at a single source (or amplified as it passes by). This is the first time I've heard about phased-array lasers.
The beam is split and re-emitted in multiple points. By controlling the optical length (refractive index, or just the length of the waveguide by using optical junctions) of the path that leads to each emitter, the phase can be adjusted.
In practice, this can be done with phase change materials (heat/cool materials to change their index), or micro ring resonators (to divert light from one wave guide to another).
The beam then self-interferes, and the resulting interference pattern (constructive/destructive depending on the direction) are used to modulate the beam orientation.
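A toy numeric sketch of that steering mechanism (all parameters made up: 16 emitters at half-wavelength pitch). Setting each emitter's phase to cancel the geometric path difference at the desired angle puts the constructive-interference peak there:

```python
import numpy as np

wavelength = 905e-9          # a common lidar wavelength (assumption)
k = 2 * np.pi / wavelength
d = wavelength / 2           # half-wavelength emitter pitch
n = np.arange(16)            # 16 emitters

def array_factor(theta_rad, steer_deg):
    """Normalized far-field magnitude of the emitter array."""
    phase = -k * d * n * np.sin(np.radians(steer_deg))  # per-emitter phase shifts
    path = k * d * np.outer(np.sin(theta_rad), n)       # geometric path differences
    return np.abs(np.exp(1j * (path + phase)).sum(axis=1)) / len(n)

thetas = np.radians(np.linspace(-90, 90, 1801))
response = array_factor(thetas, steer_deg=20)
print(np.degrees(thetas[response.argmax()]))  # main lobe lands at ~20 degrees
```

Changing `steer_deg` (i.e. re-programming the per-emitter phases) moves the lobe with no moving parts, which is the whole appeal.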
You are right that a single source is needed, though I imagine that you can also use a laser source and shine it at another "pumped" material to have it emit more coherent light.
I've been thinking about possible use cases for this technology besides LIDAR. Point-to-point laser communication could be an interesting application: satellite-to-satellite communication, or drone-to-drone in high-EMI settings (a battlefield with jammers). It would also make mounting laser designators on small drones a lot easier. Here you go, free startup ideas ;)
In principle, as the sibling comment says, you could measure just the phase difference on the receiver end. The trick is that it's much harder at light frequencies than at radar frequencies. I'm not even sure we can directly measure the phase of a light beam, and even if we could, the Nyquist rate is incredibly high: 2x the carrier frequency takes us into PHz territory.
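The arithmetic behind the PHz remark, taking a telecom-band 1550 nm source as an example (905 nm, also common in lidar, would be higher still):

```python
c = 299_792_458        # speed of light, m/s
wavelength = 1550e-9   # telecom-band laser wavelength (example)

carrier_hz = c / wavelength
print(carrier_hz / 1e12)      # ~193 THz optical carrier
print(2 * carrier_hz / 1e15)  # Nyquist rate ~0.39 PHz: far beyond any ADC
```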
There might be something cute you can do with interference patterns, but no idea about that. We do sort of similar things with astronomical observations.
A phased array is an antenna composed of multiple smaller antennas in the same plane that can constructively/destructively aim its radio beam in any direction it is facing. I'm no radio engineer, but I think it works via the interference pattern being strongest in the direction you want the beam aimed. This is mostly used in radar arrays, though I suppose it could work with light too, since light is also a wave.
I think about it like a series of waves in a pool. One end has wave generators (the lasers) spaced appropriately such that resulting waves hitting the other end interfere just right and create a unified wavefront (same phase, amplitude, frequency).
Not an expert, but the main challenges with laser coherency arise when shaping the output using multiple transmitters.
For lidar you transmit a pulse from a single source and receive its reflection at multiple points. Mentioning phased array with lidar almost always means receiving.
What are the chances some non-trivial proportion of the millions of cars on the road will not have their LIDAR designed, built, installed or calibrated correctly? I suspect this is going to be a recognized public health issue in a decade or two. (It will likely be an issue well before that, but unrecognized...)
There is an incentive to use higher power. Push the edge of safety limits to achieve higher performance from lower cost devices, for example.
It occurs to me there is an opportunity here. Passive lidar detectors sampling fleets of vehicles in the real world, measuring compliance and detecting outliers, would be interesting. A well placed, stationary device could sample thousands of vehicles every day. Patterns will emerge among manufacturers. Failure modes will be seen.
Cursory queries on this reveal nothing. Apparently, no one is doing this. We're all relying on front end certification and compliance. No thought given to the real world of design flaws, damage, faulty repairs, unanticipated failure modes, etc.
Apparently there are lidar jammers. I bet those are rigorously compliant with Class 1 safety regs... No one manufacturing those is ever going to think; "hey, why not a 50W pulse train?"
For every one of those safety measures, the number of people intentionally bypassing or ignoring it is assuredly non-zero.
But is it going to rise to a level of concern? I don't think we're going to see a ton of cars with blinding lasers installed, unless they are installed to intentionally blind people.
If you have used Face ID, or someone has used face detection on a modern smartphone near you, or you've pulled up to a modern intersection, you've been blasted with lasers. A day may come when that's the largest concern, but today it's not my primary problem, and investing in FUD isn't going to bring any benefits.
That's a lot of qualifiers. And replace "humans" with "cameras" and I'm reminded that despite their well-intentioned efforts Volvo has failed there already.
It really isn't though. It's how you do something correctly. Drill into the details of just about any system and you'll see there's a lot of assumptions based on the layers above and below.
A good safety system requires multiple of these failures to occur together to become unacceptable in risk.
I get pretty ticked when people shine laser lights in my direction regardless of their intensity, so I'm not too thrilled about the idea of invisible lasers hitting me square in the pupil without my knowledge.
There are laser measurers sold for a few bucks on Temu. Robot vacuums sold for a few hundred dollars have lidars that map out a room in seconds.
Is there any actual technical reason why automotive lidar should be expensive? Just combine visual processing with a single-point sampler that feeds it points of interest, and an accurate model of the surroundings can be built.
Most spinning robovac LIDARs are 2D. Most solid state robovac LIDARs are like 8x8 array of laser pointers.
Automotive LIDARs are more like 128x64 px for production models, or 1920x1080 px for experimental models, with GbE or HDMI-class industrial outputs. Totally different technologies.
Oh my god so many reasons. I don't feel like getting fully into it but that's kind of like asking why you can't use your kitchen scale to measure highway traffic as it drives over it.
I know that automotive parts have a standard requirement to withstand 80°C (or 120°C for military use). A robot vacuum working in a living room can probably be made cheaper because it does not have to face as harsh an environment.
Also, range is probably a factor. In a living room, you probably need something like 20m max. Your car should "see" farther.
Sure, those are the assumptions, but silicon is silicon, copper is copper and solder is solder. They don't use easy-melting electronics in vacuums and hardened stuff in cars; the tech is about the same unless it is supposed to work in a highly radioactive environment. The plastics are different, but car interiors are full of plastics, so it's unlikely that the cost of the temperature-resistant plastics needed for this is more than a cupholder's.
As for the range, again, pretty powerful lasers are sold at sub-$10 prices at retail. I am sure there must be higher calibration and precision requirements as the distance increases, but is it really orders of magnitude more? A 120-meter laser measurer with 1cm accuracy is 15 euros on Temu, and that thing is a handheld device with an LCD screen and a battery. How much distance do you actually need?
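For what it's worth, the raw timing requirement is similar for the cheap measurer and the automotive unit: direct time-of-flight at 1 cm accuracy needs tens-of-picoseconds timing either way. The difference is that the handheld can average over a long dwell on one point, while a car lidar has to hit that accuracy for huge numbers of points per second in direct sunlight:

```python
c = 299_792_458  # speed of light, m/s

def tof_timing_ps(range_accuracy_m: float) -> float:
    """Round-trip timing resolution needed for a given one-way range accuracy."""
    return 2 * range_accuracy_m / c * 1e12

print(round(tof_timing_ps(0.01)))  # ~67 ps for 1 cm accuracy
```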
Vibrations are surely an issue with electromechanical systems but hardly with electronics. There are plenty of cheap electronic accessories for cars and you can observe that those keep functioning for years.
To add to the rest of the comments: a reliability standard also adds cost. The scale is different, but compare a car bolt to a bolt on a crewed spacecraft.
@dang .... do these comments seem organic to you? old accounts with almost zero karma going out of their way to use the same verbiage to compliment waymo 18 minutes after an article gets posted? .... dead internet at work.
Please don't post like this. If you suspect something, please email us (hn@ycombinator.com) with links to specific comments. The guidelines are clear about this:
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Anytime a Tesla or Elon related article is posted it gets a barrage of negative comments, usually FUD-like, and any neutral or positive comment gets downvoted heavily. A bit suspicious to say the least; the pattern is very clear. They are not doing it very well, though; it should be a bit more nuanced.
There is no evidence of any such organised campaign. The critical comments we see against that company and person are generally from known, established HN users, and align with frequently-expressed sentiments among the general public. And the complaint is just as often made that "anything remotely critical" about that company and person is flagged. If posts about the topic are being downvoted and flagged, it's mostly because that person and company are in the news so frequently that most commentary about them is repetitive, sensationalist and uninteresting, and thus off topic for HN.
> pricing below US $200. That’s less than half of typical prices now, and it’s not even the full extent of the company’s ambition.
This means there are sensors available today for something like $500 or more. At 4 per car, that's still just $2,000, which is a very reasonable cost add even for a midrange car.
And I'm sure Chinese competitors aren't factored into price comparisons like this; the Chinese surely have cheaper options still.
So affordable lidar is not the limitation. Despite that, self-driving doesn't really exist outside of Waymo, which people take to mean that lidar is their killer advantage. But with other cars now carrying lidar, I think that might not turn out to be the deciding factor.
I'm not sure anyone today really thinks self driving hinges on the hardware. Comma does a surprisingly good job with very minimal hardware (in the form factor of an old Tom Tom!). The advantage is really the device's processing power (cramming enough compute in without making it crazy expensive) and the data that the manufacturer has about the environment and training data to handle edge cases. You can't just buy those things, because the people that have them would be your competitors.
The article is a bit muddy on what is hope and what is product. Can we _really_ buy a solid state lidar today? At what cost? When can I have it delivered?
The article starts out without saying it, but my takeaway by the end is "not $200" and "not in the near future".
I never understood why Tesla HAD to get rid of the lidars. Expensive today, sure, but can you imagine all the training data they missed out on? Technology has a way of becoming cheaper and cheaper. It seemed short-sighted, even if it meant running at a loss; again, the training data.
If the pros of having a camera are monumental, then couldn't the video and lidar be combined to be even greater?
Because Tesla Clown-in-Chief asked if humans could drive with just visual input, why can't a Tesla? C-in-C conveniently ignored that, to begin with, humans have binocular vision, and his cars had none. Also conveniently ignored were the facts that human eyes have immense dynamic range, are self-cleaning, and can move to track objects of interest. On top of this, humans also have hearing, which helps gauge danger. Many of these things could be filled in by Lidar but since C-in-C apparently had a revelation from heaven, possibly caused by drugs, lidar had to go.
I really wish that companies would just sell their products instead of doing the business relationship 2-step. It is an unnecessary waste of time to sell product.
It looks like these sensors have just enough range to be effective for lidar terrain scanning. I would buy a Movia S right now just to try it out.
Cameras alone can handle the vast majority of nominal driving scenarios, but the long tail of safety-critical edge cases is where progress slows dramatically. Many of these cases are driven by degraded or ambiguous perception, which is where multi-modal sensing, such as combining cameras with lidar, can reduce uncertainty. In adverse weather like fog or heavy rain, that reduction in uncertainty can translate directly into safer behavior, such as earlier and more confident emergency braking, even if no single sensor performs perfectly on its own.
Biggest risk is that a beam steering element stops while the emitters are running. Basically impossible with a phased array emitter like the article discusses.
And you'd probably have to be staring into the laser at close range while it was doing that.
The laser beams usually aren't tiny points like your laser pointer. Several centimeters across is more typical, especially at typical road distances. Your pupil is very small in comparison.
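A rough feel for why beam diameter matters, assuming (unrealistically) a uniform beam profile and a fully dilated ~7 mm pupil:

```python
def pupil_power_fraction(beam_diameter_m: float,
                         pupil_diameter_m: float = 0.007) -> float:
    """Fraction of a uniform beam's power that can enter the pupil."""
    return min(1.0, (pupil_diameter_m / beam_diameter_m) ** 2)

print(pupil_power_fraction(0.05))   # 5 cm beam: only ~2% of the power
print(pupil_power_fraction(0.003))  # laser-pointer-sized beam: all of it
```

Real beams are Gaussian rather than uniform, so this is only an order-of-magnitude sketch, but it's the basic reason wide beams are so much more forgiving.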
The optical hazard calculations are a very early part of the design of a LIDAR system, and all of this does get considered. Or should anyway.
Biggest risks are for people involved in R&D, where beams may be static and very close to personnel.
This is not quite true; it depends what you're talking about. Automotive LiDAR sensor prices typically range from $150-300 today for standard units. Mid-range ADAS (L2+/L3) sensors are about $600-750, and the long-range units used by robotaxis like Waymo's are about $1,500-6,000 or more per sensor.
I still believe in cameras. I have a comma 3X and it works really well. Just add a thermal camera to deal with fog etc. Waymo has some of the same camera limitations that Comma and Tesla do.
There's no reason to believe in just cameras. Cameras are easily blinded by glare, and their efficacy drops dramatically when they get dirty. Having inexpensive lidar AND cameras is the best of both worlds. When it comes to safety and comfort, we shouldn't be optimizing for cost. If we figure out how to make cameras alone bulletproof in the future, great. But that's not where we're at today.
I think that supports most people's viewpoint though. Visible-light cameras alone can "work", but more sensors are of course better. Your infrared example, for instance.
The only reason not to have more sensors of different types is cost (equipment and processing costs). Those costs are coming down fast.
What makes you say radar is extremely expensive? Virtually every car from the last decade has at least one, many have two or more. They’re barely more than a PCB and a radar ASIC.
If you want to compete with LIDAR, you need high resolution 4D (range, velocity, azimuth, and height) RADAR. Those are usually phased arrays with expensive phase sensitive electronics, and behind that a chip that can do a lot of Fourier transforms very quickly.
The cheap RADAR devices you're talking about usually only output range and velocity, sometimes for a handful of rather large azimuth slices. That doesn't compete with LIDAR at all.
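A toy illustration of the "lots of Fourier transforms" part: classic FMCW processing recovers range and velocity from a 2D FFT over fast time (samples within a chirp) and slow time (across chirps). All parameters here are illustrative, not from any real sensor:

```python
import numpy as np

# Illustrative FMCW parameters
c = 3e8          # m/s
B = 150e6        # chirp bandwidth -> range resolution c/(2B) = 1 m
T = 50e-6        # chirp duration
fc = 77e9        # 77 GHz automotive band
N, M = 256, 64   # samples per chirp (fast time), chirps per frame (slow time)
fs = N / T       # ADC sample rate

R, v = 30.0, 10.0             # simulated target: range (m), closing speed (m/s)
f_beat = 2 * R * B / (c * T)  # range maps to a beat frequency within a chirp
f_dopp = 2 * v * fc / c       # velocity maps to a phase ramp across chirps

t = np.arange(N) / fs
m = np.arange(M)
sig = np.exp(2j * np.pi * (np.outer(f_dopp * T * m, np.ones(N))
                           + np.outer(np.ones(M), f_beat * t)))

rd = np.abs(np.fft.fft2(sig))                   # the range-Doppler map
m_hat, n_hat = np.unravel_index(rd.argmax(), rd.shape)
range_est = n_hat * (fs / N) * c * T / (2 * B)  # beat bin -> meters
vel_est = m_hat / (M * T) * c / (2 * fc)        # Doppler bin -> m/s
print(range_est, vel_est)                       # close to 30 m and 10 m/s
```

Real 4D imaging radar does this per antenna pair and then beamforms across the array for azimuth and elevation, which is where the compute bill really comes from.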
Below is one of the comments posted to the original article; reading it makes me think that most of the article has been regurgitated by some AI:
>"This misleading article contains numerous factual errors regarding automotive lidar. Here are the most glaring:
There are multiple manufacturers, including Hesai, that use mechanical means for at least one scan axis and are already sold for a fraction of the "$10k - $20k" price noted by the author. Luminar itself built this class of scanners before going bankrupt.
Per Microvision's own website, the Movia-S does not use a phased array and also does not have a range anywhere near 200m.
Velodyne and Luminar do not even exist as companies anymore. Both have gone bankrupt and been acquired by competitors."
Is this human-safe at these volumes? There was a time you could get your feet sized by putting them into an X-ray box at the shoe store. Those were removed from stores once the harm was known.
Well, the energy levels used in those devices should be minuscule, and the wavelengths used are well studied. The problem with X-rays was the lack of studies on health effects, and of regulations based on those effects. I think, since that time, we've studied radiation (be it light, RF or other parts of the spectrum) much more.
There is indeed a possibility that we're overlooking some bio-electromagnetic interaction effects; for instance, there is now some evidence that LED lights might not be harmless. But again, it's not that they affect biological structures somehow; it's that the lack of certain spectral components has some effects. It's an interesting topic to research. But lidar "should" be safe.
The main damage risk from LIDAR is to retinal rods and cones. You just know some jerk is going to overclock his system and we know some people just don't care about the harm they cause so long as they get a benefit. As a combo that means I'll be wearing protective eyewear outdoors the day this tech comes to the roads.
I saw a Waymo in Seattle, today. If Waymo can get Seattle right, that gives me a lot of confidence that their stack is very capable of difficult road conditions.
Note: I have not had the pleasure of riding in one yet, but from what my friend in SJ says, it’s very convenient and confidence-inspiring.
Seattle probably isn't any harder than SF, other than the occasional weather event where the hills ice over and we get a bunch of funny (and scary) videos.
I took the Waymo from San Jose airport to home on the peninsula. It took the 101 highway back for the most part, driving very conservatively at 65-55 mph, and in the right most lane. It still has a few quirks though. When there aren't any cars around it will speed up to 65 mph, but at on-ramps, it will slow down to 55 and then speed up once past. It will get stuck behind slow drivers being in the right most lane and patiently follow them a few car length behind them. On the plus side, the lidar stack field of view as shown on the internal display seems to see pretty far down the highway.
Even more fury-inducing, they disabled the ultrasonic parking sensors on cars that physically have ultrasonic parking sensors, to move to a vision-only stack that is nowhere near as accurate or as good, and which categorically cannot tell that ground truth has changed in its blind spot. But hey, all _people_ need are two cameras, right?
Why wouldn't you trust a Tesla? Millions of people let their Tesla drive them all over the USA (not geofenced like Waymo) without touching the wheel, from parking spot to parking spot, every day. Have you tried it?
Maybe because of the multiple investigations Tesla has currently due to crashes, deaths, injuries, etc. all caused by "whoops our cameras were fooled by some glare/fog and accelerated into a truck/pole"
Those are mainly Autopilot incidents, which people conflate with FSD, and a high percentage are human-caused accidents (Autopilot requires full attention and the driver is liable).
Autopilot is Tesla’s brand name for adaptive cruise control with lane centering. This is a common feature available on a wide range of vehicles from nearly every major manufacturer, though marketed under different names (e.g., ProPilot, BlueCruise).
Drivers can and do misuse adaptive cruise control systems, sometimes with fatal consequences. Memes aside, there is no strong evidence that fatal misuse occurs more frequently by owners of Tesla cars than with comparable systems from other brands.
This perception reflects the Baader–Meinhof phenomenon, more commonly known as the frequency illusion. Nobody is collecting statistics for other brands, so it’s assumed the phenomenon doesn’t occur.
A similar pattern occurred with media coverage of EV fires. Except in this case, good statistics exist which prove the opposite: ICE vehicles catch fire more often than EVs.
> Why wouldn't you trust a Telsa, millions of people let there Tesla drive them all over USA (not geofences like Waymo)
I own a Tesla and paid about $10K for the full self driving capability a few years ago. Yeah, I would not trust a Tesla to drive me from airport to my house. There is a reason Tesla is still stuck at level 2 autonomy certification and not 3, 4 or 5.
I would agree for most Teslas on the road. However, the very latest (HW4) cars are significantly better at FSD where I would nearly trust it now. Most of those older (pre-2023?) cars will not have their hardware upgraded so they'll still have FSD that drives like an idiot!
Because it is not real autonomous driving. Being liable for software that you can neither verify nor trust is THE dealbreaker. Once Tesla says "we are liable for all accidents with FSD" under higher-level autonomous driving, the game changes. But Waymo is just way more reliable.
> millions of people let there Tesla drive them all over USA
There aren't a million Teslas with FSD active in the US. According to Tesla in their latest earnings report there are 1.1 million people worldwide with FSD.
I mean, it doesn't. If you actually look at it, comma.ai proves that level 2 doesn't require lidar. That's not the same as full-speed safe autonomy.
While it is possible to drive vision-only (assuming the right array of cameras, i.e. not the way Tesla has done it), lidar gives you a low-latency source of depth that can correct vision mistakes. It's also much less energy-intensive for working out whether an object is dangerous and on a collision course.
To do that in vision, you need to work out what the object is (i.e., is it a shadow?) and then triangulate it. That requires continuous camera calibration and isn't all that easy. If you have a depth "prior" (yes it's real, yes it's large, and yes it's going to collide), it's much, much simpler to use vision to work out what to do.
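To make the "depth prior" point concrete: with a direct range and range-rate measurement, the collision check collapses to a division, with no need to classify the object first. A hypothetical helper:

```python
def time_to_collision_s(range_m: float, range_rate_ms: float):
    """TTC from a direct range + range-rate measurement.
    range_rate < 0 means the gap is shrinking; returns None if not closing."""
    closing = -range_rate_ms
    return range_m / closing if closing > 0 else None

# Obstacle 40 m ahead, closing at 20 m/s (~72 km/h speed differential):
print(time_to_collision_s(40.0, -20.0))  # 2.0 s to decide and brake
```

The vision-only equivalent of this two-line function is a detection, calibration and triangulation pipeline, which is the asymmetry the comment is describing.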
It's fair to point out that comma.ai is an SAE level 2 system; however, it's not geofenced at all, which is an SAE level 5 requirement. But really that brings up the fact that SAE's levels aren't the right ones, merely the ones they chose to define since they're the standards body. A better set of levels are the seven I go into more detail about on my blog.
As far as distinguishing shadows on the road, that's what radar is for. Shadows on the road as seen by the vision system don't show up on radar as something the vehicle will run into.
Your autonomy scale is pretty arbitrary and encodes assumptions about the underlying technology and environments the vehicle is supposed to implement and operate in.
The SAE autonomy scale is about dividing responsibility between the driver and the assistance system. The lowest level represents full responsibility on the driver and the highest level represents full responsibility on the system.
If there is a geofenced transportation system like the Vegas loop and the cars can drive without a human driver, then that is a level 5 system. By the way, geofencing is not an "SAE level 5" requirement. Geofencing is a tool to make it easier to reach requirements by reducing the scope of what full autonomy represents.
Well, he's also argued that using just CV reduces sensor contention, and he claims it improves performance and release velocity, which is why they also got rid of radar and ultrasonic sensors. I am doubtful, although it'll be interesting to see regardless.
I wouldn't be surprised if this were a better solution. While radar might have worse spatial resolution, its depth perception, speed-measurement capability, and general robustness to adverse weather might make it a better complementary sensor.
Lidar struggles with things like rain and snow way worse than cameras do.
Xpeng, Wayve, and aiMotive, to name three. Probably many others who claim to use LIDAR but don't actually rely on it. Because LIDAR is perceived as a prerequisite for autonomous safety, admitting to not needing it is a bad PR move, for now.
There is a massive technical difference between "vision-first with LiDAR redundancy" and the "no LiDAR at all" approach Tesla takes. Those are not the same architecture, so claiming XPeng, Waymo, or aiMotive validate Tesla is technically misleading.
XPeng's system is sensor fusion; it is not camera-only. Waymo is even clearer: for them, LiDAR is not optional. aiMotive has now started to market camera-only, but it's experimental, with no production deployments.
Xpeng is abandoning sensor fusion. aiMotive has never bothered with sensor fusion. I never mentioned Waymo; unfortunately the AI gods at Apple auto-corrected me typing Wayve, as in Wayve Technologies Ltd.
Tesla FSD is not accurately described as a "no LIDAR at all" approach, and claiming it as such is technically misleading.
Yes, silly using just cameras. I mean, humans have Lidar sensors, that's why they can drive; why didn't we just copy that... oh wait.
In all seriousness though, Tesla is producing Cybercabs now which are a tenth the price of Waymo's vehicles and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
Humans also don't have wheels, but we build objects with wheels. It is as if we can build objects that don't resemble humans for specific purposes. Crazy...
> Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world.
Wait what? when did they actually enter mass production?
> I mean humans have Lidar sensors
Real-time SLAM is actually pretty good; the hard part is reliable object detection using just vision. Tesla's forward-facing cameras are effectively monocular, which means it's much, much harder to get depth (it's not impossible, but moving objects are much more difficult to observe if you only have cameras aligned on the same plane with no real parallax).
Ultimately Musk is right: you probably don't need lidar to drive safely. But it's far simpler and easier to do if you have lidar. It's also safer. Musk said "lidars are a crutch" not because he is some sort of genius; it's been obvious that SLAM-only driving is the way forward since the mid '00s (if not earlier). He said it because he thought he could save money by not having lidar. The problem for him is that he didn't do the research to see how far away proper machine perception is from the last 1% in accuracy needed to make vision-only safe and reliable.
> Tesla's forward facing cameras are effectively monocular
Notably, human depth perception is effectively monocular in driving situations at distances of 60 feet or farther. Stereo vision is best in the area your limbs can reach.
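That claim is consistent with the usual stereo geometry: depth error grows with the square of distance. Plugging in rough human figures (~65 mm baseline, ~20 arcsec stereoacuity; both are ballpark assumptions):

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def stereo_depth_error_m(z_m: float, baseline_m: float = 0.065,
                         acuity_arcsec: float = 20.0) -> float:
    """Depth uncertainty of a stereo pair: dZ ~ Z^2 * dtheta / B.
    Defaults are ballpark human figures, not measured values."""
    return z_m ** 2 * acuity_arcsec * ARCSEC / baseline_m

print(round(stereo_depth_error_m(2), 3))    # ~6 mm at 2 m: great within reach
print(round(stereo_depth_error_m(18), 2))   # ~0.5 m at ~60 ft
print(round(stereo_depth_error_m(100), 1))  # ~15 m at 100 m: effectively monocular
```

The same quadratic falloff applies to camera stereo rigs, just with a different baseline and angular resolution, which is why wide-baseline or lidar depth is attractive at road distances.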
Not mass production yet, but the first one rolled off the completed assembly line at Giga Texas last week.
Sensor fusion is not far simpler either: when the sensors disagree, and they often will, you have to decide which to trust.
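To be fair, fusion stacks usually don't hard-pick a winner; the standard move is to weight each estimate by its uncertainty (the scalar core of a Kalman update). A minimal sketch with made-up numbers:

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance weighting: the noisier estimate gets less say,
    and the fused variance is lower than either input's."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return est, 1.0 / (w_a + w_b)

# Camera says 48 m with variance 9; lidar says 50.2 m with variance 0.04:
est, var = fuse(48.0, 9.0, 50.2, 0.04)
print(round(est, 2), round(var, 4))  # estimate lands almost on the lidar reading
```

The genuinely hard part is the case this sketch ignores: deciding that one sensor's variance estimate is itself wrong (a fully blinded camera that is confidently reporting garbage).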
It is amazing to see how many people here are confident they know the one true way to build autonomous systems based on nothing but wanting to confirm their biases
This is a weirdly tired counterpoint that Elon and Elon stans like to bandy about as if it were an apples-to-apples comparison. Humans have an ultra-high-dynamic-range binocular vision system mounted on a pan/tilt/swivel gimbal that allows a great degree of freedom of movement and parallax, backed by a complex heuristic system for analyzing the vision data.
The Tesla FSD system has... well, sure, a few more cameras, but they're low resolution, and in inconveniently fixed locations.
My alley has an occlusion at the corner where it connects to the main road: a very tall, very ample bush that basically makes it impossible to authoritatively check oncoming traffic to my left. I, a human, can determine that if I see the light flicker even slightly as it filters through the bushes, that the path is not clear: a car is likely causing that very slight change in light. My Tesla has no clue at all that that's happening. And worse, the perpendicular camera responsible for checking cross-traffic is mounted _behind my head_ on the b-pillar, in a fixed location that means that without nosing my car _into_ the travel lane, there is literally no way for it to be sure the path is clear.
This edge case is navigated near-perfectly by Waymo, since its roof-mounted lidar can see above and beyond the bush and determine that the path is clear. And to hit back on the "Tesla is making cheaper cars that can drive autonomously anywhere in the world" claim: I mean, they still aren't? Not authoritatively. Not authoritatively enough that they aren't seeing all sorts of interventions in the few "driverless" trials they're doing in Austin, and not authoritatively enough in my own experience with Tesla FSD. It works well enough on the fat part of the bell curve, but those edges will get you, and a vision-only system is extremely brittle in certain conditions and failure modes that a lidar/radar backup helps cover.
Moreover, Waymo has brought lidar development in-house, they're working to dramatically reduce their vehicle platform cost by reducing some redundant sensors, and they can now simulate a ground truth model of an absurd number of edge cases and odd scenarios, as well as simulate different conditions for real-world locations in parallel with their new world modeling systems.
None of which reads to me as "not going well for Waymo." Waymo completes over 450,000 fully autonomous rides per week right now. They're dramatically lowering their own barriers to new cities/geographies/conditions, and they're pushing down the cost per unit substantially. Yeah, it won't get to be as cheap as Tesla owning the entire means of production, but I'm still extremely bullish on Waymo being the frontrunner for autonomous driving for the foreseeable future.
Waymos are still making lots of errors that a human wouldn't (stopping in the middle of a road due to a puddle was a recent one: https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...). 17 years after betting on LIDAR, I think Tesla is ahead now in most respects. I could be wrong, though; we will probably know by the end of this year.
> My Tesla has no clue at all that that's happening. And worse, the perpendicular camera responsible for checking cross-traffic is mounted _behind my head_ on the b-pillar
It has a wide-angle camera in front whose feed you usually can't see outside the service menu. It should cover that case.
> Yes, silly using just cameras, I mean humans have Lidar sensors, that why they can drive, why didn't new just copy that....oh wait.
Humans don't have wheels and cannot go 70MPH. Humans also don't have rear view cameras and cannot process video feeds from 8 cameras simultaneously. The point of these machines is to be better than humans for transportation. If adding LIDAR means that these vehicles can see better than humans and avoid accidents that humans do get into, then I for one want them in my vehicle.
The human brain is a product of millions of years of dealing with spatial problems for survival — and most individual humans are the product of thousands of hours of experience using it to navigate the physical world.
We're always getting closer at emulating this, but we're still a ways off from matching it.
Stereo-based depth mapping is kind of bad, especially if it is not IR-assisted. The quality you get from lidar out of the box is crazy good in comparison.
What you can do is train a model on both the camera and lidar data to produce a good disparity and depth map, but that just means you're using more lidar, not less.
>It all seriousness though, Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
This feels like a highly misleading claim that might technically be true in the sense that there are fewer restrictions, but a reduction in restrictions doesn't imply an increase in capability.
The comment about Waymo seems particularly myopic. Waymo has self-driving technology and is operating as a financially successful business. There is no conceivable situation where the mere existence of competition with almost the same capabilities would shake that up. Why isn't it companies like Uber, which have significantly fallen behind, that are in trouble?
>Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
The brains (ai models) are more important than the sensors. Cameras are good enough. Lidar doesn’t keep Waymos from driving into an 18” deep puddle, or driving the wrong way down the street. Lidar doesn’t help predict when a pedestrian is going to try to cross the street. Lidar doesn’t give the car the common sense to slow down because a child just ran behind a parked car and will soon be coming out the other side.
Since lidar has distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only. Lidar using cars are going to replace at least the ones that don't make use of this obvious answer to obstacle detection challenges.
Karpathy provided additional context on the removal of LiDAR during his Lex Fridman Podcast appearance. This article condenses what he said:
https://archive.is/PPiVG
And here's one of Elon's mentions (he also has talked about it quite a bit in various spots).
https://xcancel.com/elonmusk/status/1959831831668228450?s=20
Edit: My personal view is that LiDAR and other sensors are extremely useful, but I worked on aircraft, not cars.
Based on that list it boils down to 2 things it seems:
- cost (no longer a problem)
- too much code needed and it bloats the data pipelines. Does anyone have any actual evidence of this being the case? Like yes, code would be needed, but why is that innately a bad thing? Bloated data pipelines feels like another hand-wave when I think if you do it right it’s fine. As proven by Waymo.
Really curious if any Tesla engineers feel like this is still the best way forward, or if it's just a matter of having to listen to the big guy, Musk.
I've always felt that relying on vision only would be a detriment, because even humans with good vision get into circumstances where they get hurt because of temporary vision hindrances. Think heavy snow, heavy rain, heavy fog, or even just cresting a hill at a certain time of day when the sun flashes you.
Just for the record though, Musk isn't blindly anti-LIDAR. He has said (and I think this is an objective fact) that all existing roads and driving are based on vision (which is what all humans do). So that should technically be sufficient. SpaceX uses LIDAR for their docking systems.
I would argue that yes, we do use vision but we get that "lidar depth" from our stereo vision. And that used to be why I thought cameras weren't enough.
But then look at all the work with gaussian splatting (where you can take multiple 2d samples and build a 3d world out of it). So you could probably get 80% there with just that.
The ethos of many Musk companies (you'll hear this from many engineers that work there) is simplify, simplify, simplify. If something isn't needed, take it out. Question everything that might be needed.
To me, LIDAR is just one of those things in that general pattern of "if it isn't absolutely needed, take it out" – and the fact that FSD works so well without it proves that it isn't required. It's probably a nice to have, but maybe not required.
Humans aren't using only fixed vision for driving. This is such a tiresome thing to see repeated in every discussion about self driving.
You're listening to the road and car sounds around you. You're feeling vibration on the road. You're feeling feedback on the steering wheel. You're using a combination of monocular and binocular depth perception - plus, your eyes are not fixed-focal-length "cameras". You're moving your head to change the perspective you see the road at. Your inner ear is telling you about your acceleration and orientation.
And also, even with the suite of sensors that humans have, their vision perception is frequently inadequate and leads to crashes. If vision was good enough, "SMIDSY" wouldn't be such an infamous acronym in vehicle injury cases.
For those of us not aware of Australian cycling jargon, "SMIDSY" means "Sorry, Mate, I Didn't See You".
The issue is clearly attention, not vision, when it comes to humans. If we could actually process 100% of the visual information in our field of view, then accidents would probably go down a shitload.
Humans have both issues. There are many human failures which are distinctly a vision issue and not attention related, e.g. misestimation of depth/speed, obscured or obstructed vision, optical focus issues, insufficient contrast or exposure, etc.
But how many of those crashes not caused by inattention could have been avoided with less idiocy and more defensive driving? I mean, yes, we can’t see as well in fog, but that’s why you should slow down
Again, I'm still not saying that humans don't make bad decisions. I'm saying that, unequivocally, they also get into accidents while paying attention and being careful, as a result of misinterpretation or failure of their senses. These accidents are also common, for example:
* someone parking carefully, misjudges depth perception, bumps an object
* person driving at night, their eyes failed to perceive a poorly lit feature of the road/markings/obstacles
* person driving and suddenly blinded by bright object (the sun, bright lights at night)
* person pulling out in traffic who misinterprets their depth perception and therefore misjudges the speed of approaching traffic
* people can only focus their eyes at one distance at a time, and refocusing takes time. It is neither unsafe nor unexpected for humans to check their instruments while driving, but the human eye can take hundreds of milliseconds to refocus under normal circumstances. Look down, focus, look back up, and refocus as quickly as you can at highway speeds, and you will have travelled quite a long distance.
These types of failures can happen not as a result of poor decision making, but of poor perception.
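A quick back-of-envelope check of the refocus point above. The timing numbers here are rough assumptions on my part, not measurements:

```python
# How far you travel during a look-down/read/look-up cycle at highway speed.
# 300 ms per refocus and 500 ms of reading time are illustrative guesses.

speed_mps = 70 * 1609.34 / 3600     # 70 mph in meters per second (~31.3)
refocus_s = 0.3                     # eye refocus time, each direction
read_s = 0.5                        # actually reading the instrument
total_s = 2 * refocus_s + read_s

distance_m = speed_mps * total_s
print(f"~{distance_m:.0f} m travelled effectively blind to the road")
```

Even with generous assumptions, that's several car lengths covered while your eyes aren't usefully on the road.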
In theory, a computer should be able to do the same. It could do sensor fusion with even more sense modalities than we have. It could have an array of cameras and potentially out-do our stereo vision, or perhaps even use some lightfield magic to (virtually) analyze the same scene with multiple optical paths.
However, there is also a lot of interaction between our perceptual system and cognition. Just for depth perception, we're doing a lot of temporal analysis. We track moving objects and infer distance from assumptions about scale and object permanence. We don't just repeatedly make depth maps from 2D imagery.
The brute-force approach is something like training visual language models (VLMs). E.g. you could train on lots of movies and be able to predict "what happens next" in the imaging world.
But, compared to LLMs, there is a bigger gap between the model and the application domain with VLMs. It may seem like LLMs are being applied to lots of domains, but most are just tiny variations on the same task of "writing what comes next", which is exactly what they were trained on. Unfortunately, driving is not "painting what comes next" in the same way as all these LLM writing hacks. There is still a big gap between that predictive layer, planning, and executing. Our giant corpus of movies does not really provide the ready-made training data to go after those bigger problems.
Putting your point another way, in order to replicate an average human driver’s competence you would need to make several strong advancements in the state of the art in computer vision _and_ digital optics.
In India (among others), honking is essential to reducing crashes
We often greatly underestimate / undervalue the role of our ears relative to vision. As my film director friend says, 80% of the impact in a movie is in the sound
Most of what you said has nothing to do with lidar vs camera
Beyond 20 meters, motion-based depth cues are more accurate than stereoscopic vision. What is lidar helping to solve here?
Waymo claims its system, which uses a combination of LIDAR & vision, resolves objects up to 500 meters away
https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...
This company claims their LIDAR works conservatively at 250m, and up to 750m depending on reflectivity
https://www.cepton.com/driving-lidar/reading-lidar-specs-par...
LIDAR also struggles in heavy rain, snow, fog, and dust. Check how Waymo handles such conditions.
It's not only failing, it's causing false positives.
> So that should technically be sufficient
Sufficient to build something close to human performance. But self driving cars will be held to a much higher standard by society. A standard only achievable by having sensors like LiDAR.
If a self-driving car had the exact vision of humans, it would still be better because it has better reaction times. Never mind the fact that humans can't actually process all the visual information in our field of view, because we don't have the broad attention to do that. It's very obvious that you can get superhuman performance with just cameras.
Whether that's worth completely throwing away LiDAR is a different question, but your argument is just obviously false.
Better reaction times only matter if the decisions are the same / better in every case. Clearly we are not there on that aspect of it yet.
Deciding to crash faster, or "tell human to take over" really fast is NOT better.
This reminds me of the time I was distantly following a Waymo car at speed on 101 in Mountain View during rush hour. The Waymo brake lights came on first followed a second or two later by the rest of the traffic.
Even if they weren’t going to be held to a higher standard for widespread acceptance, tens of thousands of people a year in the us die due to humans driving badly. Why would we not try to do better than that?
Because that's an acceptable loss and better costs more!
Sufficient if all else were equal. But the human brain and artificial neural networks are clearly not equal. This is setting aside the whole question of whether we hope to equal human performance or exceed it.
Teslas have at least 3 forward facing cameras giving them plenty of depth vision data.
They also have several cameras all around providing constant 360° vision.
To do gaussian splatting anywhere near in real time, you need good depth data to initialize the gaussian positions. This can of course come from monocular depth but then you are back to monocular depth vs lidar.
Mentioning gaussian splatting as a reason we don't need lidar depth is a great example of Musk-esque technobabble: superficially plausible, but nonsense to any practitioner. One of the biggest problems of all SfM techniques is that the results are scale-ambiguous, so they do not in fact recover the crucial real-world depth measurement you get from lidar.
Now you might say "use a depth model to estimate metric depth" and I think if you spend 5 minutes thinking about why a magic math box that pretends to recover real depth from a single 2D image is a very very sketchy proposition when you need it to be correct for emergency braking versus some TikTok bokeh filter you will see that also doesn't get you far.
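The scale-ambiguity point is easy to demonstrate. Here's a tiny numpy sketch (my own toy, not any production pipeline): scaling the whole scene and the camera baseline by the same factor produces pixel-identical images, so multi-view geometry alone cannot tell you metric depth:

```python
import numpy as np

def project(points, cam_t, f=1.0):
    """Pinhole projection of 3D points from a camera translated by cam_t."""
    p = points - cam_t                 # world -> camera frame (no rotation)
    return f * p[:, :2] / p[:, 2:3]    # perspective divide

rng = np.random.default_rng(0)
scene = rng.uniform([-2, -2, 5], [2, 2, 15], size=(10, 3))  # points ahead
baseline = np.array([0.5, 0.0, 0.0])                        # second camera

s = 3.7  # arbitrary global scale factor
views_true = [project(scene, np.zeros(3)), project(scene, baseline)]
views_scaled = [project(s * scene, np.zeros(3)),
                project(s * scene, s * baseline)]

# A 3.7x bigger world with a 3.7x wider baseline photographs identically:
for a, b in zip(views_true, views_scaled):
    assert np.allclose(a, b)
print("projections identical at any global scale")
```

A known baseline, or IMU/odometry data, is precisely the extra metric anchor that breaks this ambiguity; pure camera imagery alone doesn't have it.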
This is not really true if you have multiple cameras with a known baseline, or well-known motion characteristics like you get from an accelerometer + wheel speed.
Why is this getting downvoted? It's good faith and probably more accurate than not.
> and the fact that FSD works so well without it proves that it isn't required
The reports that Tesla submits on Austin Robotaxis include several of them hitting fixed objects. This is the same behavior that has been reported on for prior versions of their software of Teslas not seeing objects, including for the incident for which they had a $250M verdict against them reaffirmed this past week. That this is occurring in an extensively mapped environment and with a safety driver on board leads me to the opposite conclusion that you have reached.
If Waymo has proven their model works, why is the "silly" automaker doing several orders of magnitude more autonomous miles?
My understanding is that there's more data processing required with cameras because you need to estimate distance from stereoscopic vision. And as it happens, the required chips for that have shot up in price because of the AI boom.
But I think costs were just part of the reason why Elon decided against Lidar. Apparently, they interfere with each other once the market saturates and you have many such cars on the same streets at the same time. Haven't heard yet how the Lidar proponents are planning to address that.
How does Waymo handle it now? There are many videos of Waymo depots with dozens of cars not running into each other.
It's rare, but sometimes they do hit each other:
https://www.reddit.com/r/SelfDrivingCars/comments/1mdl5zn/tw...
https://www.reddit.com/r/waymo/comments/1pggtpu/two_waymos_m...
Lidar critics like to pretend that anti-collision is not a well-studied branch of Computer Science and telecoms. Wifi, Ethernet and cellphones all work well simultaneously, despite participants all sharing the same physical medium.
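As a toy illustration of one such technique, here's a sketch of pseudorandom pulse coding with matched filtering. This is my own simplified example, not any lidar vendor's actual scheme:

```python
import numpy as np

# Each lidar unit tags its pulses with a pseudorandom +/-1 signature and
# matched-filters returns against its own code, so a co-located unit's
# pulses barely correlate and get rejected.

rng = np.random.default_rng(42)
code_a = rng.choice([-1.0, 1.0], size=64)   # our unit's signature
code_b = rng.choice([-1.0, 1.0], size=64)   # interfering unit's signature

channel = np.zeros(1024)
channel[200:264] += code_a                  # our echo, delayed 200 samples
channel[500:564] += code_b                  # interferer's pulse
channel += 0.2 * rng.standard_normal(1024)  # receiver noise

corr = np.correlate(channel, code_a, mode="valid")
peak = int(np.argmax(corr))
print("detected echo delay:", peak)         # should land at ~200
```

The autocorrelation peak of our own code stands far above the cross-correlation with the other unit's code, which is the same principle CDMA phones and spread-spectrum radios rely on.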
The points linked repeatedly focus on cost and complexity as justification, even explicitly stating Musk's desire to minimise components in Karpathy's list.
They don’t focus on safety or effectiveness except to say that vision should be ‘sufficient’. Which is damning with faint praise imho.
If that link was meant to argue that the removal of sensors makes perfect sense, I have to point out that anyone who reads it would likely have their negative viewpoint hardened. It was done to reduce cost (back when the sensors cost thousands of dollars) and out of a ridiculous desire by Musk for minimalism. It's the same desire that removed the indicator stalk, I might add.
To be clear, from a personal standpoint, I am pro-more sensors and sensor fusion.
I assume Musk, et al are acting in best faith in trying to find the right compromises.
Why would you assume Musk is acting in good faith? That’s very much not his thing.
Oh, you sweet summer child..
Instead of betting on RADAR and LIDAR HW getting better and cost going down, they went with vision only approach. Everybody in this field knows the strengths and weakness of each system. Multi-modal sensor fusion is the way to go for L4 autonomy. There is no other way to reduce the risk. Vision only will never be able to achieve L4 in all the weather conditions. Tesla may try to demonstrate L4 in limited geography and in good weather conditions but it won't scale.
The reasoning is cynical but sound. If the system uses only the sensing modes people have, it will make the mistakes people do. If a jury thinks "well, I could have done that too!" you win. It doesn't matter if your system has fewer accidents if some of the failure modes are different from human ones, because the jury will think "how could it not figure that out?"
I don't think that's the reasoning.
The reasoning was simply that LIDAR was (and incorrectly predicted to always be) significantly more expensive than cameras, and hypothetically that should be fine because, well, humans drive with only two eyes.
Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.
Sensor fusion is also hard to get right: since you still need cameras, you have to fuse the two information streams. That's mainly a software problem, and companies like Waymo have done it, but Tesla was having trouble with it earlier, and if you don't do it right, your self-driving system can be less reliable.
Sensor fusion seems like it'd be a big problem when you're handcoding lots of C++, and way less of a problem when all the sensors are just feeding into one big neural network, as Tesla and probably others are doing now. The training process takes care of it from there.
One of Udacity's first courses was on self-driving, taught by Sebastian Thrun who later cofounded Waymo. He went through some Bayesian math that takes a collection of lidar points, where each point contributes to a probabilistic assessment of what's really going on. It's fine if different points seem to contradict each other, because you're looking for the most likely scenario that could produce that combined sensor data. Transformers can do the same sort of thing, and even with different sensor types it's still the same sort of problem.
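The core trick behind that kind of probabilistic combination is inverse-variance weighting, the same update a Kalman filter uses. A minimal sketch, assuming one camera-derived and one lidar-derived range estimate with known noise variances (all numbers illustrative):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates, weighting each by inverse variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)  # fused variance beats either input

# camera depth guess: 10.4 m with sigma^2 = 1.0
# lidar measurement:  10.0 m with sigma^2 = 0.01
depth, var = fuse(10.4, 1.0, 10.0, 0.01)
print(f"fused: {depth:.3f} m, variance {var:.4f}")
```

The fused variance is always smaller than either sensor's alone, which is the whole mathematical argument for fusion. The hard engineering part is everything around this: associating measurements across sensors, handling non-Gaussian errors, and deciding what to do when the streams genuinely disagree.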
> Sensor fusion is also hard to get right, since you still need cameras you have to fuse the two information streams
The response to the challenge shouldn't be whittling down your sensor-suite to a single type, but to get good at sensor fusion.
I think this is the key. In theory, more information streams fused together (properly) should reduce error. If their stumbling block is the "properly" part, then the rest of those justifications come off as a pretty weak way to sidestep their own inability to deliver it.
We have lots of evidence of similar strategies being used in other domains, this seems like an especially life-critical domain that ought to have high rigor and standards applied.
> how incredible the human brain is compared to computers.
It is pretty incredible but people will (rightly so?) hold automated drivers to an ultra high standard. If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.
> If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.
This is evidently false. Robotaxi crash rates exceed human drivers', but there's not an effective regulatory agency to outlaw them!
https://futurism.com/advanced-transport/tesla-robotaxis-cras...
According to that article, Waymo crashes 2.3x more often than human drivers (every 98k miles vs 229k miles), which is clearly false. I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymo are safer than human drivers on average.
> According to that article, Waymo crashes 2.3x more often than human drivers (every 98k miles vs 229k miles), which is clearly false.
Why is it clearly false? It might be false, but clearly? I would definitely like to see evidence either way.
> I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymo are safer than human drivers on average.
That sounds like you are trying to find reasons to get the conclusion you want.
The NHTSA requires a report when any automated driving system hits any object at any speed, or if anything else hits the ADS vehicle resulting in damage that is reasonably expected to exceed $1,000.[1] In practice, this means that everyone reports any ADS collision, since trading paint between two vehicles can result in >$1k in damage total.
If you go to the NHTSA's page regarding their Standing General Order[2] and download the CSV of all ADS incidents[3], you can filter where the reporting entity is Waymo and find 520 rows. If you filter where the vehicle was stopped or parked, you'll find 318 crashes. If you scan through the narrative column, you'll see things like a Waymo yielding to pedestrians in a crosswalk and getting rear-ended, or waiting for a red light to change and getting rear-ended, or yielding to a pickup truck that then shifted into reverse and backed into the Waymo. In other words: the majority of Waymo collisions are due to human drivers.
So either Waymos are ridiculously unlucky, or when these sorts of things happen between two human driven cars, it's rarely reported to insurance. In my experience, if there's only minor damage, both parties exchange contact info and don't involve the authorities. Maybe one compensates the other for damage, or maybe neither party cares enough about a minor dent or scrape to deal with it. I've done this when someone rear-ended me, and I know my parents have done it when they've had collisions.
If human driven vehicles really did average 229k miles between any collision of any kind, we'd see many more pristine older vehicles. But if you pay attention to other cars on the road or in parking lots, you'll see far more dents and scratches than would be expected from that statistic. And that's not even counting the damage that gets repaired!
1. See page 13 of https://www.nhtsa.gov/sites/nhtsa.gov/files/2025-04/third-am...
2. https://www.nhtsa.gov/laws-regulations/standing-general-orde...
3. https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...
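For anyone who wants to reproduce that filtering, here's a sketch of the idea with pandas. The column names are illustrative guesses on my part, and the tiny inline CSV stands in for the real NHTSA download; check the actual file's header before relying on this:

```python
import io
import pandas as pd

# Stand-in for the downloaded SGO incident-report CSV (fake rows, and
# "Reporting Entity" / "Pre-Crash Movement" are guessed column names).
csv = io.StringIO(
    "Reporting Entity,Pre-Crash Movement,Narrative\n"
    "Waymo LLC,Stopped,Rear-ended while yielding at a crosswalk\n"
    "Waymo LLC,Proceeding Straight,Contact with road debris\n"
    "Other Co,Stopped,Unrelated incident\n"
)
df = pd.read_csv(csv)

waymo = df[df["Reporting Entity"] == "Waymo LLC"]
stopped = waymo[waymo["Pre-Crash Movement"].isin(["Stopped", "Parked"])]
print(len(waymo), "Waymo rows,", len(stopped), "while stopped/parked")
```

Scanning the narrative column by eye, as the comment describes, is still the only way to judge fault in each incident.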
Definitely. I looked at Tesla's source for these numbers, looks like they primarily used data sourced from police reports, which most people only file if the incident is serious enough to turn into insurance.
Tesla notes:
> These assumptions may contain limitations with respect to reporting criteria, unreported incident estimations (e.g., NHTSA estimates that 60% of property damage-only crashes and 32% of injury crashes are not reported to police
https://www.tesla.com/fsd/safety
Musk has never been scared of vertically integrating something that's too expensive initially.
> Musk miscalculated on 1) cost reduction in LIDAR
Given that Musk has a history of driving lower costs, it's unlikely he overestimated the long-term cost floor. He just thought we were close to self-driving in 2014.
Another factor is Andrej Karpathy, who was the primary architect for the vision-only approach. Musk wanted fewer parts, and Karpathy believed he could deliver that. Karpathy is still an advocate of vision-only.
Right, for the reasons that I just mentioned
> Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
And, less excusably, ignorant of how incredible human eyes are compared to small-sensor cameras: in particular, high dynamic range in low light, with fast motion. Every photographer knows this.
And also ignorant about how those two eyes have binocular vision, adjustable positions, and can look in multiple mirrors for full spatial awareness.
There are good arguments but this isn’t one. Many humans (like me!) drive fine without binocular vision. And the cars have many cameras all around, with wide angle lenses that are watching everything all the time, when a human can only focus in one direction at a time.
I thought only the front view has binocular vision on the cars. The others are single, with no depth perception. How does it know how close objects are outside this forward cone?
https://www.researchgate.net/publication/378671275/figure/fi...
So your eye does not have an adjustable position and you cannot use mirrors?
Both are easily compensated for by having many cameras.
Binocular vision is not only relevant for driving (well, maybe for the steering wheel, but that's not the point).
It gives us depth perception. And moving the eyes and/or head gives the depth perception over a wide field of view.
Eh, I think ‘miscalculation’ might be giving too much credit about good intentions.
He wanted (needed?) to get on the self-driving hype train to pump up the stock price. He knew that at the time there was zero chance they could sell it at the price point lidar required (or even other effective sensors, like radar), and he sold it anyway at a price point people would buy, even though it was never plausibly going to work at the level being promised.
There is a word for that. But I’m sure there are many lawyers that will say it was ‘mere fluffery’ or the like. And I’m sure he’ll get away with it, because more than enough people are complicit in the mess.
Miscalculation assumes there was a mistake somewhere, but as near as I can tell, it is playing out as any reasonable person expected it to, given what was known at the time.
I think Musk is really not as smart as he thinks he is and this specific thing was probably an earnest mistake. Lots of other fraudulent stuff going on though of course!
IMHO not using lidars sounds like a premature optimisation and a complication, with a level of hubris.
This is a difficult problem to solve and perhaps a pragmatic approach was/is to make your life as simple as possible to help get to a fully working solution, even if more expensive, then you can improve cost and optimise.
Considering he also runs a company that puts computer chips inside brains to augment them you’d think he ought to have a more sound understanding as to the limits of both.
There certainly is an ongoing miscalculation regarding human intelligence, and consequently, empathy.
Seeing the SOTA in FSD tech, it is not obvious that Musk made a miscalculation so far.
Nah
If the data were positive for Tesla, Tesla would publish it
They do not, so one can infer it is not flattering
(Before you post the "Miles driven with FSD" chart, you should know upfront (as Tesla must) that chart doesn't normalize by age of vehicle or driving conditions and is therefore meaningless/presumably designed to deceive)
Until a lawyer points out that other cars do see that. My car already has various sensors and, in manual driving, sounds alarms if there is a danger I seem not to have noticed. (There are false alarms, but most are cases where I did notice and probably should have left more safety margin, even though I wouldn't have hit anything.)
Also, regulators gather statistics, and if cars with something do better, they will mandate it.
Very recent issue with Waymo https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-.... This is 17 years after they bet the farm on LIDAR, with no signs it's ever going to be cost-effective, or better than multiple cameras with millisecond reactions and 360-degree coverage that never get tired, drunk, or distracted, backed by other cheaper sensors and NNs trained on billions of miles of real-world data.
Tesla does not handle rain well either. This is not a LIDAR problem, it is a problem with self driving cars in general.
My Tesla can't even tell if it should turn the wipers on consistently or correctly. Let alone drive in the rain.
A feature that is bulletproof in other cars with a very boring and industry standard sensor (it's not even expensive), while Tesla insisted they could do it with just normal cameras.
Seriously. Why do people think a company that can't do automatic wipers could possibly do automatic driving?
The same people that seriously thought we’d have a mars base by now.
People also don't handle rain well.
That's an example of it failing safe. I'd rather it did that than drive me into a sinkhole because it thought it was a puddle.
Ok so Waymo is useless in the rain then, kind of limiting. But at least that 0.000000000001% times it actually is a sinkhole you won't damage the bumper.
I'd rather a Waymo be useless in the rain rather than a Tesla be actively dangerous and likely to kill me.
Tesla ""autopilot"" fatalities: 65
Waymo fatalities: 0
Autopilot isn't Full Self-Driving (FSD); most cars these days ship with smart cruise control (which is basically what Autopilot is). Do you have fatality statistics for FSD?
If we are just talking about smart cruise control, most cars are using cameras and radar, not lidar yet. But Tesla is special since it doesn’t even use radar for its smart cruise control implementation, so that could make it less safe than other new cars with smart cruise control, but Autopilot was never competing with Waymo.
> Waymo fatalities: 0
By some measures Waymo is actually at -1 fatalities. There has been one confirmed birth of a child in a Waymo. https://apnews.com/article/baby-born-waymo-san-francisco-6bd...
I think the car would have to be more actively involved in the process for that to count. :)
There is also a report from the same flooding in LA of a Waymo driving into a flooded road and getting stuck.
They might have flipped a switch after that, causing this.
Dude, that's not a 'puddle' as the article claims; that's a body of water where it's not even visually obvious whether it's safe to drive through. Maybe I'm a bad driver, but I'd hesitate to drive through that in a small car either.
I think the difference is the prior knowledge a commuter has of that section of road. Does it always flood shallowly in heavy rain?
Even without prior knowledge, seeing others safely navigate the same section will lower your estimated risk.
The amount of water will depend on the rain, so we don't know how shallow it is even with prior knowledge.
If you drive the road every day, you probably do. If you can see someone drive through it (perhaps someone who knows the area well and knows how deep it is based on puddle width), you definitely do.
> A vehicle got stuck trying to figure out an obstacle, so sensors with less information are better than sensors with more information.
It is sound to think that cameras plus an accelerometer, plus data about the car and environment (what you get from your ears), ought to be able to mimic and improve on human driving. However, humans' general-purpose spatial awareness and ability to integrate all kinds of general information is probably really hard to replicate. A human would realize that an orange fluid spilling across the road might be slippery, or guess which way a person might go from the way their eyes are pointing...
It may just be faster to make lidar cheap. And lidar can do things humans can't.
IIUC, the cameras in a Tesla have worse vision (resolution) at far distances than a human. So while your argument sounds fine in the abstract, it'll crumble in court when a lawyer points out a similar driver would've needed corrective lenses.
This is a new and flawed rationale that I haven't heard before. Tesla cameras are worse (lower resolution, sensitivity, and dynamic range) than human eyes and don't have "ears" (microphones).
The cars do have at least one microphone.
Inside the car though, right? With multiple exterior microphones they could do spatialization like Waymo.
Most accidents happen because people are human: not paying attention, inebriated, not experienced enough as drivers, or reckless.
It's not fair to say that vision based models will "make the same mistakes people do" as >99% of the mistakes people make are avoidable if these issues were addressed. And a computer can easily address all those issues
Which means the mistakes vision-based models for today are unique to them.
Pretty hard to do if your whole selling point is ‘better and safer than human’ however?
As I understand, lidars don't work well in rain/snow/fog. So in the real world, where you have limited resources (research and production investment, people talent, AI training time and dataset breadth, power consumption) that you could redistribute between two systems (vision and lidar), but one of the systems would contradict the other in dangerous driving conditions — it's smarter to just max out vision and ignore lidar altogether.
> lidars don't work well in rain/snow/fog.
Neither do cameras, or eyeballs.
When it's not safe to drive, it's not safe to drive.
I've been in zero-road-speed whiteout conditions several times. The only move to make is to the side of the road without getting stuck, and turning on your flashers.
Low-light cameras would not have worked. Sonar would not have worked. Infrared would not have worked.
I think the weather where cameras/sensors start having problems is much better than zero-vis whiteout.
If we could make sensors that let an autonomous vehicle drive reliably in any snow/rain where a human could drive (albeit carefully), then we're good. But we are a long way from that. Especially since a lot of sensor tech like cameras tends to fail in two ways: performance degrades in adverse conditions, and the sensor can simply stop functioning at all if covered in ice, snow, or water.
Radar might still have worked
https://en.wikipedia.org/wiki/L_band
If you have multi-return lidar, you can see through certain occlusions. If the fog/rain isn't that bad, you can filter for the last return and get the hard surface behind the occlusion. The bigger problem with rain is that you get specular reflection and your laser light just flies off into space instead of coming back to you. Lidar not work good on shiny.
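A minimal sketch of that last-return filter. The data here is a toy; real lidar drivers expose per-pulse return lists in their own formats:

```python
def last_returns(pulses):
    """For each pulse, keep only the furthest return; earlier returns are
    likely scattering off fog or rain rather than a hard surface."""
    return [max(returns) for returns in pulses if returns]

# ranges (meters) of successive returns for four laser pulses:
pulses = [
    [3.1, 42.0],   # fog droplet at 3.1 m, wall behind it at 42 m
    [2.8, 41.7],
    [41.9],        # clean single return
    [2.5],         # only fog scattering: the filter can't save this one
]
print(last_returns(pulses))  # [42.0, 41.7, 41.9, 2.5]
```

As the last pulse shows, this only helps when some of the laser energy actually reaches a hard surface and makes it back; in dense fog or on specular wet surfaces, there is no good return to keep.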
No, it isn't "smarter." Camera-only driving is the product of a stubborn dogmatic boss who can't admit a fundamental error. "Just make it work" is a terrible approach to engineering.
Can hatred of Musk not derail this entire thread please? I have a camera-only ADAS that I think works quite well, but having both would be better.
Criticism of Musk isn't hate of Musk. The point is completely valid and the results of this management style infuses all of his businesses albeit with differing results.
It's significant that a truly hard problem like autonomous driving doesn't respond to a "brute force" management style. Rockets aren't in this category because the required knowledge and theory is fairly complete, whereas real autonomous driving is completely novel.
Shoe, meet foot.
I don't know what that means
https://en.wiktionary.org/wiki/if_the_shoe_fits,_wear_it
Oh, that's silly. I don't own a Tesla. I just wanna talk about LIDAR without people ragebaiting about Elon.
> without people ragebaiting about Elon
Hmm. Is it ragebaiting to respond to a tired and wrong statement by saying that it's tired and wrong and that the situation is merely the product of piss poor management decisions? People get understandably frustrated seeing the same wrong talking point that people with domain knowledge in computer vision and robotics have repeatedly explained is wrong in extremely fundamental ways.
> I don't own a Tesla.
n.b. The shoe/foot comment was not about you. It was about Musk. It wouldn't make any idiomatic sense for the expression to be about you given what you said and what you were responding to. If they'd said "pot, meet kettle", then it would have been about you. In that context, saying that you don't own a Tesla feels like a weird thing for you to insert in your comment. It potentially comes across as suspiciously defensive.
suspiciously defensive??? you got me. Or maybe I just didn't understand their comment.
Why does this matter? You have to slow down in rain/snow/fog anyway, so only having cameras available doesn't hurt you all that much. But then in clear weather lidar can only help.
If your vision is good enough to drive in rain/snow/fog, you don't need lidar in clear conditions. If you planned to spend $10B on vision and $10B on lidar — you would be better off spending $20B on better vision.
We have actual proof this isn’t true. Waymo is light years ahead of Tesla despite spending less.
Tesla is spending upwards of $6B/year to Waymo’s $1.5B. Only one of these companies makes an autonomous robotaxi that’s actually autonomous.
Yes, but how much of that is due to the lidar vs camera choice?
It still infuriates me that Tesla went so long being able to call their feature "Autopilot." Then they had the audacity to call it user error when people thought the car would automatically pilot itself.
> If yo[u can] drive in rain/snow/fog, you don't need lidar in clear conditions
Of course you do, you're driving at much higher speeds and so is the surrounding traffic. You can't just guess what you might be looking at, you have to make clear decisions promptly. Lidar is excellent in that case.
Nothing works perfectly in all conditions and scenarios. Sensor fusion has been the most logical approach now, and into the foreseeable future.
Computer vision does not work exactly like human vision, closely equating the two has tended to work out poorly in extreme circumstances.
High performance fully automated driving that relies solely on vision is a losing bet.
Why does that strategy absolutely require the lidar to be absent from the car? When was less technology the solution to a software problem?
People who don't understand that sensor fusion is an entire field of study with tons of existing work and lots of expertise have been fooled by a fake argument of "If the camera and lidar disagree, what do you do?"
It's frustrating to still see it repeated over a decade later. It was always bullshit. It was always a lie.
Limited resources? Billions per year are being thrown at the base technology. We have the capital deployed to exhaust every path ten times over.
Even if so, it doesn't mean that capital deployment efficiency and expected payoff make equal sense in all directions.
Then again, it's good that we have self-driving companies with lidar and without — we will find out which approach wins.
We have already found out, Waymo is SAE Level 4, Tesla is SAE Level 2
The Swiss cheese model would like to disagree.
When you have sensor ambiguity sounds like the perfect time to fail safely and slow to a halt unless the human takes over.
Do cameras work well in those conditions? Nope. Cameras also don't handle certain angles of glare well, so as a consumer I'd rather have something over-engineered for my safety to cover all edge cases...
Evidence clearly shows otherwise.
Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.
That way you pick up things that are missed by one or more sensor types, catches problems and errors from any of them, and end up with the most accurate ‘view’ of the world - even better than a normal human would.
It’s what Waymo is doing, and they also, unsurprisingly, have the best self-driving right now.
This is silly. Cameras are cheap. Have both. Sensors that behave differently in different conditions are not an exotic new problem. The Kalman filter has existed for about a billion years, and machine learning filters do an even better job.
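The core of that fusion is just inverse-variance weighting (the static heart of a Kalman update). A minimal sketch, with made-up noise numbers, fusing one camera range estimate with one lidar range estimate:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two noisy estimates of the same quantity, weighting each
    by the inverse of its variance. The fused variance is never worse
    than either input, so adding a sensor can only help."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Camera says 50 m but is noisy at range (var 9.0 m^2);
# lidar says 47 m and is tight (var 0.04 m^2).
z, var = fuse(50.0, 9.0, 47.0, 0.04)
print(z, var)  # ~47.01, ~0.0398: the estimate hugs the lidar
```

If lidar variance is inflated (say, in heavy rain), the weights shift smoothly back toward the camera. There is no "which sensor do I believe?" dilemma, just weights.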
Cameras are cheap, but, as I understand:
1) it's not cheap to produce lidars at a stable predictable quality in millions;
2) car driving training data sets for lidars are much scarcer (and will always be much scarcer due to cameras' higher prevalence) and at a much lower quality;
3) combined camera+lidar data sets are even scarcer.
Doesn’t that make it a sensible long term play to equip your car with $200 LIDAR and start gathering that data as a competitive advantage?
Yeah, this is all about Musk not wanting to admit he was wrong.
1. Automotive LiDAR is down to $350 in China already. BYD is starting to put LiDAR in even entry level cars. (It's been in their mid and high end cars for a while).
2+3. BYD collects extensive training data from customers, much like Tesla does. They will have no trouble with training.
> Since lidar has distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only
Human eyes do not have distance information, either, but derive it well enough from spatial (by ‘comparing’ inputs from 2 eyes) or temporal parallax (by ‘comparing’ inputs from one eye at different points in time) to drive cars.
One can also argue that detecting absolute distance isn’t necessary to drive a car. Time-to-contact may be more useful. Even detecting only “change in bearing” can be sufficient to avoid collision (https://eoceanic.com/sailing/tips/27/179/how_to_tell_if_you_...)
Having said that, LiDAR works better than vision in mild fog, and if it’s possible to add a decent absolute distance sensor for little extra cost, why wouldn’t you?
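The constant-bearing rule from that sailing link is easy to sketch: if the bearing to another object stays roughly constant while the range closes, you're on a collision course. A toy version with hypothetical 2D tracks (real systems work in ego-relative coordinates with noisy detections):

```python
import math

def bearing(own, other):
    """Bearing from own position to the other object, in degrees."""
    return math.degrees(math.atan2(other[1] - own[1], other[0] - own[0]))

def collision_course(own_track, other_track, tol_deg=1.0):
    """The classic mariner's test: constant bearing + decreasing range
    over matched position histories means a collision course."""
    bearings = [bearing(o, t) for o, t in zip(own_track, other_track)]
    ranges = [math.dist(o, t) for o, t in zip(own_track, other_track)]
    closing = ranges[-1] < ranges[0]
    steady = max(bearings) - min(bearings) < tol_deg
    return closing and steady

# Two vehicles converging on the same point: bearing holds, range shrinks.
own = [(0, 0), (1, 0), (2, 0), (3, 0)]
other = [(4, 3.0), (4, 2.25), (4, 1.5), (4, 0.75)]
print(collision_course(own, other))  # True
```

Note that no absolute distance is required, only relative geometry over time, which is the commenter's point.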
Human/animal vision uses way more than parallax to judge distances and bearings - it uses a world model that evolved over millions of years to model the environment. That's why we can get excellent 3D images from a 2D screen, and also why our depth perception can be easily tricked with objects of unexpected size. Put a human or animal in an abstract environment with no shadows and no familiar objects, and you'll see that depth perception based solely on parallax is actually very bad.
Human eyes are much better than cameras at dealing with dynamic range. They’re also attached to a super-computer which has been continuously trained for many years to determine distances and classify objects.
> Human eyes do not have distance information
Single human eyes do resolve depth. Not as well as binocular vision, but you don't lose all depth perception if you lose an eye.
https://en.wikipedia.org/wiki/Monocular_vision
I don’t like the comparison between cars and humans. Humans don’t travel around at 100 mph in packs of other humans. Why not use every sensor type at our disposal if it gives us more info to make decisions? Yes, I understand it’s more complicated, but we figure stuff out.
Let me know when you have a camera package with human eye equivalency.
It's not that simple. Cameras don't report 3D depth, but these AI models can and do pick up on pictorial depth cues. LiDAR is incredibly valuable for collecting training and validation data, but may also make only an insignificant difference in production inference.
Stereo cameras? My 2015 Subaru has them to detect obstacles and it works great.
Yea, even if they could match human-level stereo depth perception with AI, why would they say "no" to superhuman lidar capabilities? Cost could be a somewhat acceptable answer if there weren't problems with the camera-only approach, but there are still examples of silly failures. And if I remember correctly, they also removed their other superhuman sensor, radar, from their newer models: the one that in certain conditions could sense multiple cars ahead by bouncing the signal underneath other cars.
Because they don't have superhuman LIDAR. They never did. Nobody ever did. LIDAR input is not completely reliable so what do you do then?
https://en.wikipedia.org/wiki/Sensor_fusion
I'm not an expert on ML vision, but I do have a Tesla and it seems to be able to tell how far away things are just fine. I'm not sure what would be wrong with the vision system that lidar needs to fix.
The phantom braking issue with auto pilot tells me it can’t. A shadow from a tree doesn’t trigger your brakes locking up at 70+ mph when there’s a lidar sensor to tell you it’s not a physical object.
“Just buy FSD” isn’t a reasonable answer to a problem literally no other automaker suffers from.
Stopped using autopilot because of the phantom braking.
It's also recently gotten much worse at lane departure sensing, often confused by snow or slightly faded road markers. Not pleasant to have the alarms go off while calmly and safely driving.
How do you explain the reports of Robotaxis running into fixed objects? If what you are saying is true that shouldn't be able to happen.
https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-cr...
Luckily everyone else in the comments is an expert. And also doesn't recognize that Teslas already drive themselves and did not need lidar. They also mischaracterize the reasoning.
> I'm not sure what would be wrong with the vision system that lidar needs to fix.
This conversational disconnect is as old as the hills:
1. Person 1 asks "what's wrong" (if it ain't broke don't fix it)
2. Person 2 wants to make something better
My meta-goal here on HN (and many places where people converse) is for people to step back and recognize the conversational context and not fall into the predictable patterns that prevent us from making sense of the world as best as we can.
Yeah it's BS. Tesla uses lidar where it makes sense: They have a small lidar fleet to collect ground truth depth data for better vision estimation. This part is long solved.
Humans don't have explicit distance sensors either. When LIDAR sensors were $20k+ I think it made a lot of sense to avoid them.
Just say Tesla, why censor yourself.
I have a suspicion here on HN. When criticizing big tech, especially Google and FB, at a certain time of day a specific cohort comes online and downvotes. Suspiciously, it's around the time people in the US start working or come online. Either fanboys, employees, or an organized group of users trying to silence big-tech criticism.
I have no proof, of course, and it might be coincidence, or just a difference in mindset between US and European citizens. But it has happened a few times already, and to me it looks sus.
But if they actually read the comments rather than just ctrl+F the company name, then not writing the company name but hinting at it in an obvious way is no more helpful either.
I have seen this happen multiple times, sometimes to fairly reasonable comments with just a tiny negative tone.
There is also flagging abuse which effectively kills the comment /post.
I know for a fact at least 1 bigger US company has a bot in slack that brings up any mentions of $companyname on hackernews...
It's been my experience that hn and reddit have a very high overlap in audience these days. The jerrybreakseverything crowd. Anything anti-tesla, anti-grok, is applauded.
Yeah, I agree with GP, pretty much anything that isn't effusively praising tesla or elmu etc will tend to get reflexively downvoted.
Considering cameras can produce reliable-enough distance measurements AND handle all the color perception needed to drive roads legally, it was always a ridiculous idea by a certain set of people that lidar is necessary.
No, cameras cannot create reliable distance measurements in real-world conditions. Parallax is not a great way to measure distance for fast, unpredictably moving objects (such as cars on the road). And dirt or misalignment can significantly reduce accuracy compared to lab conditions.
Note that humans do not rely strictly on our eyes as cameras to measure distances. There is a huge amount of inference about the world, based on our internal world models, that goes into vision. For example, if you put us in a false-perspective or otherwise highly artificial environment, our visual acuity goes down significantly; conversely, people with a single eye (so no parallax-based measurement ability) still have quite decent depth perception compared to what you'd naively expect. Not to mention, our eyes are kept very clean and maintain their alignment to a very high degree of precision.
I don't think they meant literally cameras only can create reliable distance measures. At the risk of putting words in their mouth, I would guess they meant "cameras as the only input to a distance model". the "model" doing all the heavy lifting, covering the points that you quite rightly point out are needed
Several companies, most notably Tesla, have done this well enough to drive in all manner of traffic. I'm not going to comment about if lidar is strictly needed or not to achieve better-than-human safety, that's yet to be proven one way or another by anyone. The point is that cameras + local inference can do a pretty good job at distance estimation
Stereo cameras are useless against repeating patterns. They easily match neighboring copies. And there are lots of repeating or repeating-like patterns that computers aren't smart enough to handle.
You can solve this by adding an emitter next to the camera that does something useful, be it just beacon lights or noise patterns or phase-synced laser pulses. And those "active cameras" are what everyone calls lidar.
There is plenty of evidence showing that cameras alone are not safe enough, and even Tesla has realized that removing radar to save cost was a mistake.
'cameras can see in color, therefore lidar is unnecessary for self driving' is unconvincing
> ridiculous idea by a certain set of people that lidar is necessary.
"Necessary"? Seems like a straw man, don't you think? I strive to argue against the strongest reasonable claim someone is making.
Lots of reasonable people suggest LIDAR is helpful to fill in gaps when vision is compromised, degraded, or less capable.
People running businesses, of course, will make economic trade-offs. That's fine. But don't confuse, say, Elon's economic tradeoff with the full explanation of reality which must include an awareness that different sensors have different strengths in different contexts.
So, when one thinks about what sensor mix is best for a given application, one would be wise to ask (and answer) such questions as:
- What is the quality bar?
- What sensors are available?
- How well do various combinations of sensors work across the range of conditions that matter for the quality bar?
- WRT the "quality bar": who gets to decide what matters? The company making the cars? The people who drive them? Regulators who care about public safety? The answer: it is a complex combination.
It is time to dismiss any claim (or implication) that "technology good, regulation bad". That might be the dumbest excuse for a philosophy I've ever heard. It is the modern-day analogue of "Brawndo's got what plants crave." Smart people won't make this argument outright, but unfortunately, their claims sometimes reduce to this level of absurdity. Neither innovation nor regulation is inherently good or bad. There are deeper principles in play.
Yes, some individuals would use their self-proclaimed freedom to e.g. drive without seatbelts at 100 mph at night with headlights off. An extreme example, but it is the logical extension of pure individualism run amok. Regulators and anyone who cares about public safety will draw a line somewhere and say "No. Individual stupidity has a limit." Even those same people would eventually come to their senses after they kill someone, but by then it is too late.
It's not complicated. LIDAR hardware was in short supply during COVID. Elon obviously couldn't slow down production and sink the inflated stock price.
April 2019: https://www.youtube.com/live/Ucp0TTmvqOE?t=9220s
There are probably even earlier statements from him against lidar...
WTF was their calculus on the break-even liability point? The "if we do this, we save X amount of money, but stand to lose Y in lawsuits over crashes that lidar could otherwise have prevented."
All of driving is designed for visual.
TIL roads don't have rumble strips
I find it comical that people continue to go back to this rage well against "a certain company" for their vision-only approach when the truth is they have the best automatic driving system an individual can buy, rivaling Waymo and beating the Chinese brands.
Why are the commenters not pissed at the dozens of other car companies who have done absolutely nothing in this space? Answer: because it's not nearly as fun to be pissed at Kia or Mercedes or whoever. Clearly they are just enjoying the shared anger, regardless of whether it is justified.
Because other car companies don't have CEOs who've been super confident about predicting actual full self driving either "this year" or "next year" for the past decade. If Ford had been swearing up and down they'd have full self driving cracked any day now for ten years, and been charging people for the hardware along the way, everyone would be pissed at them too.
Surely you already know this, so why pretend otherwise?
1. Tesla is not competitive with Waymo, they're not even in the same class. Waymo is 10 years ahead at least. I understand you can't buy a Waymo, but still.
2. Other car companies are properly valued, Tesla is overinflated.
3. Other cars, even basic Hondas, have the same level of self driving as Teslas.
4. Other car companies don't lie to their customers about their capabilities or what they're buying.
> Other cars, even basic Hondas, have the same level of self driving as Teslas.
This is not true at all. Don't confuse lane assist with self driving. And yes I'm aware people are upset by the "Autopilot" product name they chose for lane assist.
You're way off if you think that Waymo and FSD are anywhere close.
There is certainly some truth that "some company" overpromised and underdelivered. They advertise "full self driving" but then hide in the fine-print that "oh jk, not really, but its still full self driving if anyone asks ;) ;) ;)"
I think the frustration stems from the obvious falsehoods in the advertising, and the doubling-down on the tech, despite the well-documented weaknesses of the implementation.
Have you driven in Tesla FSD recently? If anything it’s undersold. It’s an absolute miracle. I use it everyday.
Please be courteous to other drivers on the road; we all share it. Just make sure you're the one in charge, not the software. This isn't to put your argument down, but to offer the perspective of people involved in accidents. Loss of life is bad, but surviving a serious accident can be just as devastating.
Certain company has 300k subscribers that rely on that ridiculous service.
My father lost vision in one eye and 50% in the other something like 20 years ago. He struggles with parking but is otherwise doing OK without lidar. Turns out motion-based vision is more accurate beyond 10-20 meters than stereoscopic vision.
I wouldn’t take too much issue with the “cameras are enough” claim if cameras actually performed like eyes. Human eyes have high dynamic range and continuous autofocus performance that no camera can match. They also have lids with eyelashes that can dynamically block light and assist with aperture adjustment.
The appeal to human biology and argument against fusion between disparate sensors kinda falls flat when you’re building a world model by fusing feeds from cameras all around the car. Humans don’t have 8 eyes in a 360 array around their head. What they do have is two eyes (super cameras) on ~180 degree swiveling and ~180 degree tilting gimbal. With mics attached that help sense other vehicles via road noise. And equilibrioception, vibration detection, and more all in the same system, all fused. If someone were actually building this system to drive the car, the argument based on “how did you drive here today?” gets a lot stronger. One time I had some water blocking my ear and I drove myself to the hospital to get it fixed. That was a shockingly scary drive — your hearing is doing a lot of sensing while driving that you don’t value until it’s gone.
One camera can't really produce depth/distance information, but two cameras sure can. The eyes in your head don't capture distance information individually, but with two eyes you can infer distance.
You're forgetting the nervous system and the brain connected to those eyes (and vestibular system).
I'll preface by saying lidar should be used with autonomous vehicles.
Individual cameras don't have distance information, but you can easily calibrate a system of cameras to give you distance information. Your eyes do this already, albeit not quantitatively. The quantitative part comes from math our brains aren't set up to do in real time.
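The calibrated-stereo math behind that is one line: depth falls out of the disparity between the two images. A toy sketch with made-up camera numbers (real pipelines add rectification and sub-pixel matching on top of this):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a calibrated stereo pair: Z = f * B / d,
    where f is focal length in pixels, B the camera baseline in meters,
    and d the horizontal disparity in pixels."""
    if disparity_px <= 0:
        return float("inf")  # at or beyond the resolvable range
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 30 cm baseline:
print(stereo_depth(1000, 0.3, 10.0))  # 30.0 m
print(stereo_depth(1000, 0.3, 1.0))   # 300.0 m
```

Since depth goes as 1/d, a one-pixel matching error at long range swings the estimate enormously, which is why stereo degrades exactly where highway driving needs accuracy most.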
It was cost, wasn't it?
If this lowers lidar costs, and Tesla has spent all this time refining the camera technology, they can now have both.
Use both.
It was a great decision to drop LiDAR. The cars are running excellently without it
Why make things more complicated than they need to be? Humans don't have lidar and we are the only intelligence that can reliably drive. Lidar just seems like feature engineering, which has proven to be a dead end in most other AI applications (bitter lesson).
https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...
> Why make things more complicated than they need to be? Humans don't have lidar and we are the only intelligence that can reliably drive.
Because we want self driving cars to be safer than human driven cars.
If humans had built in lidar we would use it when driving.
Read the comment again. It's not that vision is "good enough", it's that feature engineering doesn't work
Self driving cars are not equipped with human brains so this doesn’t really make sense.
“We should achieve self driving cars via replicating the human brain” strikes me as an incredibly inefficient and difficult way to solve the problem.
Then you deeply underestimate how difficult the problem is, and deeply misunderstand where all the effort has been spent in developing autonomous vehicles.
If all the effort has been spent in trying to replicate the human brain then I am comfortable saying that is a mistake.
We have a tool that can tell with great accuracy how far away an object is. The suggestion that we should ignore it and rely on cameras that have to guess it because “that’s how humans work” is absurd, frankly.
> we are the only intelligence that can reliably drive.
Science would like to point out that rats also can learn to drive
https://theconversation.com/im-a-neuroscientist-who-taught-r...
yeah but not reliably, they often totally space on their commitments to pick you up from the airport, etc
If you had to choose between picking someone up at the airport or dragging a slice of pizza twice your size down the NYC subway stairs, what would YOU do?
The bitter lesson I think is a great way of explaining the logic behind Tesla's strategy. People aren't getting it.
Whether or not it'll actually work remains to be seen, but it's a perfectly reasonable strategy. One counterargument would be that the bitter lesson can be applied to LIDAR too; you don't have to use that data for feature engineering just because it seems well suited for it.
Humans can drive with eyes only, but we are better drivers when we can also use other senses like hearing. If humans has lidar we would use it when driving.
Don't cars already use a ton of sensors that don't reproduce human senses and ways of doing things?
This knee-jerk reply is old and tired, and the counterarguments are well-trod at this point. Even if cameras-only can build a car that’s as good as humans, why should we settle for “as good as“ humans, who cause 40,000 fatalities a year in the US? If we can do better than humans with more advanced sensors, we are practically morally obligated to do that.
Yes! The smart and nuanced panoply of replies to the GP are a wonderful counterbalance to people "just saying things that pop into their head" -- which is unfortunately how I view a lot of human speech nowadays :/
I worked in mobile robotics for a defense contractor in the early-mid 2000s and we had a homegrown Lidar (though we always called it ladar) that was large, heavy, and cost $250k to make. I remember a year out of college driving 2 hours to a military base with it in the backseat of my car and being paranoid at every bump I hit. These things seem like a dream.
Before y'all say that now everyone will be able to get Waymo's sensor suite for hundreds of dollars instead of tens of thousands, that's the easy part.
Waymo benefits from Google's unparalleled geospatial data. Waymo also has a support architecture that doesn't depend on real time remote operation, which can't be implemented reliably in almost all cases. You can't be following your supposedly unsupervised cars with a supervisor in a chase car. You can't even be driving remotely. Your driver software has to be able to drive independently in all cases, even those where it needs to ask a human how to proceed.
The difference between level two and level three driver assist and level four autonomy is like the difference between suborbital flight and putting a payload in orbit. What looks like a next logical step actually takes 10X or more effort, scale, and testing.
I’m not disagreeing with what you’re saying, but does Alphabet actually intend Waymo to be a trillion dollar retail car business itself, selling cars to everyone? Or would they be happy to sell all those super cool things to OEMs? In a world where “everyone” can make a car affordable that can run Waymo’s software, they may be happy to license all that to “everyone” and simply collect fat royalty checks, à la Microsoft in the 90s, allowing them to make a ton of incremental money without all the capex of making their own cars.
They'll probably operate some services and also license their tech to carmakers to sell to consumers. I'm sure there'll be a subscription involved for that too.
In a saner world Teslas would be running Waymo's self-driving stack instead of the half-baked "might kill you at any time" not quite-FSD.
They are not the same. I don't think Tesla or its consumers are interested in geofenced self driving, they want to be able to use it on road trips and driving around suburbs.
I think that was implied in my comment: that Tesla would use Waymo's stack for free navigation and self-driving, instead of not quite-FSD.
That's true. Waymo has true Level 4 automation, and Tesla customers delude themselves about the capabilities of their Level 2 system and endanger others for some clout videos.
I mean, Tesla gave up on quality self-driving many years ago when Elon went hard against LIDAR. He's never relented, either, and I don't foresee that changing.
That is one plausible outcome. Waymo is experimenting with partnerships with ride hailing apps on the one hand, and building their software into Toyotas on the other hand. So far they have built a few thousand vehicles in a factory run by Magna, which specializes in low volume vehicles. Hyundai wants to sell Waymo tens of thousands of vehicles. That's going to look different in fundamental ways.
It would be smarter to take that approach. Google's core competency is technology, technical infrastructure, and research. More mundane things like manufacturing and customer service are... shall we say, less of a core competency. Take the high value add, leave other things to automakers to duke it out. Also good for avoiding attracting even more regulatory attention.
Why sell cars to everyone?
People on here used to buy servers themselves (very few of us still do); most now rent via the cloud.
Why should transportation be different?
>>Why should transportation be different?
Good question, and for many it will not be, and rentals are acceptable.
But also for many, renting a car has a huge ICK factor. It is one thing while traveling to rent from an agency who has (purportedly) thoroughly cleaned and inspected the car before you get it. It would be quite another to rent cars like scooters, where the previous user likely smoked, left wrappers and food debris, and who knows what else, even damage. Plus, most people who own cars keep a fair amount of stuff in the car for their specific convenience, and have their own settings, etc.
The fact that the likes of Zipcar, Turo, and the lot have not entirely taken over urban transport but instead remain niche players shows the extent of this preference.
For suburban and rural markets, it just gets more extreme. How quickly could a rental service deliver a car? Could it reliably do it in less than 5-10 minutes for people to run an errand? If not, unless they are insanely cheap, people will likely want to own their own. Perhaps it'll be more of a hybrid: households owning one car and renting a spare for specific trips?
I think it's more comparable to Uber or Lyft. Some passengers may actually prefer to not have a driver chatting with them.
If you use them regularly, renting is both a pain in the ass and quite costly. If you have atypical security (or even normal, in many cases) or usage patterns, it’s even worse.
A lot of folks are relearning lessons on this front in Cloud right now.
Not a good analogy: a server is not a personal space occupied by humans. It's for the same reason people don't want to hot-desk; they prefer a personal space with their own stuff in it.
Moore's law applies to CPUs, not to the car, which has been functionally the same for decades.
With the price declines in EVs, we are soon talking about 1 million EVs, even with all the Waymo tech, for $50 billion. The approximate annual revenue of a private-hire car is $50k+, i.e. $50-60 billion a year for a million cars. But the total taxi-driver population in the US is only 350-400k. I think people are underestimating how soon electric tech plus AI/automation will hit.
Alphabet wants drivers on their devices looking at ads instead of driving.
Do OEMs want to manage their own ride-share platforms? 10+ apps/providers?
I think they were referring to making personal vehicles self driving. Probably the rideshare market is just the start for Waymo.
> but does Alphabet actually intend Waymo to be a trillion dollar retail car business itself, selling cars to everyone?
Google doesn't do retail other than Chromecast and Pixel phones, and that is already annoying to them as it is because it involves something Google is notoriously bad at - actual customer support.
Starting up a car brand is orders of magnitude worse.
For one, people actually need to trust your brand to survive for at least five to ten years - cars are an investment, and a car that I can't trust to get safety-relevant spare parts (brake rotors, brake pads, axle bearings) all of a sudden is essentially an oversized paperweight. For a company such as Google, this alone (remember Killed By Google) is a huge obstacle to overcome.
Then, you need production. Sure, you can go to Magna or other contract manufacturers, or have an established large brand build vehicles for you, or you go the Tesla route and build everything from scratch. Either way has associated pros and cons.
And then, you need a nationwide network of spare parts, dealerships, repair shops and technicians that can fix the issues people will run into, because the wide masses abuse cars in ways you might not even dare think about while testing, or because other people run into your cars and so your cars need repairs.
Even being a derivative of an established car brand can be a royal PITA. Let's take Mercedes Benz as an example with the 2003-2009 Mercedes-Benz SLR McLaren. On paper, it's a Mercedes vehicle, with a lot of the parts actually originating from stock Mercedes cars - but most dealerships will refuse to work on it. Either because they lack the support to even properly jack the car up, or because they lack the specialized tools for the AMG engine, or because they cannot even order the parts as Mercedes gates repairs for that thing to special shops. Or, again Mercedes, with Maybach luxury cars. The situation isn't as bad as with the McLaren, but their cars are challenging in another way - the S 650 Pullman weighs around 3 metric tons empty and is 6.50 meters long. Good luck finding a jack even capable of lifting that beast, most Mercedes sports-car shops don't carry jacks that are normally used to lift Mercedes Vito transporters!
Even Tesla, and they've been at it for the better part of two decades, still struggles with that. Their shitty spare parts logistics actually drive up not just insurance prices for their own customers, but for everyone - hit a Tesla with your Dodge and be at fault, and now your insurance has to pay out for months of a rental car because Tesla can't be arsed to provide the body shop the Tesla ends up at with spare parts in any reasonable time.
Established car brands however have all of that ironed out for many, many decades now. American, Asian, European, doesn't matter. And the spare parts don't even have to be made for cars: ask your local Volkswagen dealer to order a few pieces of "199 398 500 A" and one piece of "199 398 500 B" and you'll probably have a lead time of less than a day, at least in Germany - for the uninitiated: that part number belongs to the famous sausage, the second one to the accompanying curry ketchup, with more sausages being sold each year than actual cars.
And established car brands also bring something to the table: their own experiences with integrating smart technology. Yes, particularly German carmakers are notoriously bad in that regard, but for example Mercedes Benz was the first car brand in the world to get a certified Level 3 system on the road [1] and are now working on a Level 4 certification [2]. That kind of experience in navigating bureaucracy, integration and testing cannot be paid for in money.
tl;dr: I see no way in which Waymo goes to general availability regarding selling cars. They will run their own autonomous car fleets in select markets where they can fully control everything, but seeing Waymo tech generally available will be as part of established car brands.
[1] https://group.mercedes-benz.com/technologie/autonomes-fahren...
[2] https://group.mercedes-benz.com/technologie/autonomes-fahren...
> For one, people actually need to trust your brand to survive for at least five to ten years - cars are an investment, and a car that I can't trust to get safety-relevant spare parts (brake rotors, brake pads, axle bearings) all of a sudden is essentially an oversized paperweight.
Those bits should be easy, unless the OEM was tragically stupid. Where you'll get into trouble is when you need replacement computer bits; those are often tricky for mainstream brands, but if your niche brand ECUs all fail around the same time (wouldn't be the first time for a Google product), and the OEM isn't around to make new ones or make it right, off to the junkyard with all of them. If it's just normal failure rates, you can probably scavenge from totaled vehicles at junkyards even after new parts become unobtainium.
OEM style lighting will also probably get hard to find. Ideally a niche maker would lean towards standard parts there, but that's not the fashion of the times.
> Those bits should be easy, unless the OEM was tragically stupid.
Well... just look at Tesla. A lot of their parts don't come from the classic supplier-OEM delivery chain model, but Tesla makes as much as they can on their own. It saves them a bunch of money, both when it comes to the profit margin of the supplier, and being at the whims of their supplier, but it is nasty for the customers when there simply is no parts OEM that one could go to when the vehicle manufacturer goes out of business or refuses to support the car any further.
> Where you'll get into trouble is when you need replacement computer bits
Oh hell yes. New EU law is particularly to blame here. OBD diagnosis was always nasty enough (you virtually always need to buy expensive diagnosis software and hardware, e.g. Mercedes XENTRY, VW ODIS, BMW ICOM), but the newest requirements enforce live digital signatures and anti-tamper checks. Nasty as hell. And the buses themselves... it's no longer just one CAN bus doing everything, not since the Kia Boys; it's multiple buses of different speeds, some using encryption on the wire, all making diagnosis, troubleshooting and repairs much more difficult than it used to be.
And that is before getting into the replacement parts issue itself that you wrote up.
> Starting up a car brand is orders of magnitude worse.
Tesla did it, and is more valuable than most other car brands added together. They had a novel product: a good EV that was fun to drive. Is that a unique situation? Could a truly autonomous car launch do it?
Your arguments make sense in themselves, but maybe underestimate the revolutionary value that a level 4 car would provide.
> Tesla did it, and is more valuable than most other car brands added together.
Half of Tesla's value is hopium, the rest of it is pure trust that the current government will continue propping Elon up (even if he personally ran afoul of the Dear Leader). A lot of the promises Elon made, particularly when it comes to FSD, had to be walked back, and I don't see them ever coming to fruition - at least not for the cars that don't have LIDAR hardware.
>Waymo benefits from Google's unparalleled geospatial data.
How much of Waymo's training data is based on LIDAR mapping versus satellite/aerial/street view imagery? Before Waymo deploys in a new city, it deploys a huge fleet of cars that spend months of driving completely supervised, presumably to construct a detailed LIDAR map of the city. The fact that this needs to happen suggests Google's geospatial data moat is not as wide as it seems.
If LIDAR becomes cheap, you could imagine other car manufacturers adding it to cars, initially and ostensibly to help with L2 driver aids, but with the ulterior motive of making a continuously updated map of the roads. If LIDAR were cheap enough that it could be added to every new Toyota or Ford as an afterthought, it would generate a hell of a lot more continuous mapping data than Waymo will ever have.
> Before Waymo deploys in a new city, it deploys a huge fleet of cars that spend months of driving completely supervised, presumably to construct a detailed LIDAR map of the city.
Not entirely true. From their "road trips" last year, the trend is that they just deploy fewer than 10 cars in a city for a few weeks (3-4 weeks from what I recall) for mapping and validating. Then they come back after a few months to set up infrastructure for ride hailing (depot, charging, maintenance, etc.) and start service.
> difference between suborbital flight and putting a payload in orbit. What looks like a next logical step actually takes 10X or more effort, scale, and testing.
But the difference between suborbital flight and putting a payload in orbit is smaller than you might think.
The delta-v is not that significantly different. The scale is almost the same; add a little more power and a second stage, and your payload is hurtling around the Earth instead of falling back like a ballistic missile, which is what the suborbital predecessors were.
Suborbital ballistic "travel" beyond continental distances is almost as expensive as orbital. If you can make it to the antipode, you're basically almost orbital.
Suborbital "trips" straight up, beyond the atmosphere, are very cheap.
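A rough back-of-envelope supports this split. Ignoring drag and gravity losses (so both numbers are lower bounds), a vertical hop to the Karman line needs a tiny fraction of orbital speed, whereas an antipodal ballistic trajectory needs nearly the full orbital velocity:

```python
import math

g = 9.81          # m/s^2, surface gravity
R = 6.371e6       # m, Earth radius
mu = 3.986e14     # m^3/s^2, Earth's standard gravitational parameter

# Straight-up hop to 100 km: burnout speed from energy conservation, v = sqrt(2*g*h)
h = 100e3
v_hop = math.sqrt(2 * g * h)

# Circular low Earth orbit at ~200 km altitude: v = sqrt(mu / r)
v_orbit = math.sqrt(mu / (R + 200e3))

# For a ballistic arc, the required burnout speed grows with range and
# approaches v_orbit as the target approaches the antipode.
print(f"vertical hop to 100 km: ~{v_hop / 1000:.1f} km/s")
print(f"LEO circular velocity:  ~{v_orbit / 1000:.1f} km/s")
```

Roughly 1.4 km/s versus 7.8 km/s, and since fuel requirements grow exponentially with delta-v (the rocket equation), "straight up and back down" is in a completely different cost class from anything antipodal or orbital.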
>Waymo benefits from Google's unparalleled geospatial data.
That's true, and they have a huge headstart, but I wonder if all these cubesat companies can bring the price down on data enough that others will be able to compete.
Maybe. But Google has been there in a sensor laden car, overhead with an airplane, and buying all the access that is available in satellite imagery, and fusing that together in a continually updated model. Plus real time data from a billion maps and navigation users. I pity the fool going up against that.
I don't think Waymo is using Street View / satellite data to drive. They have to build an HD map using a special LIDAR-equipped vehicle before deploying in a new area.
Maybe their navigation system will be better than the competition due to real-time traffic data from Google Maps users, but I don't think it'll be so much better as to be an unbeatable advantage.
They use LIDAR maps in service areas, but might be using Street View data for training? (I imagine it would be really, really difficult to build a useful simulator with just SV imagery, but probably also quite valuable to have the variety of environments.)
Did you see this? They can build a simulator from SV images, even the lidar part: https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-f...
Time to extend comma.ai!
Yeah, imagine having, say, two of these LIDAR sensors, each pointed towards the car's blind spots. Comma already does well with the car's built-in radar + vision on straight freeway runs, but can't reliably change lanes on its own. The built in blind spot detectors on most cars are a binary "there is/not a car present", which doesn't reliably determine if it's safe to actually do a lane change.
I was using cruise control on the highway yesterday and thinking: this is like very cheap very crude self-driving. And you know what? In its limited UNIX-like way, it's great: the car does a much better job of gradually injecting fuel than I, with my brick-like human foot, can do. Robot 1, human 0.
And from there it's easy to think: couldn't the car also detect white lines and stay within them? It doesn't have to be perfect; it can be cruise control++. If it errs a little, I can save it. But otherwise, this is a function I'd love to use if it was available, for a sub $1000 price point.
I think of Tesla autopilot as sophisticated cruise control. It can perform most driving tasks better than I can and saves a lot of cognitive work, but it still needs close to 100% of my attention.
Is this comment from 2010? Maybe I'm missing your point, but it seems you would be shocked by what modern cars are capable of.
The intention of my comment (possibly unclear) was to say: I know we can do self-driving very well very expensively. But what can we do extremely cheaply?
Like the difference between "what can we do with an LLM on my maxxed-out laptop with an RTX 5090 card" vs. "what can we do with a Mac mini." The self-driving car version of that.
This $200 MicroVision lidar is a short range lidar that produces a really fuzzy point cloud. At best, it can be used for parking. It's unlikely to help self driving cars much at all, much less "reshuffle auto sensor economics".
The mind salivates at the idea of sub-$100 and soon after sub-$10 Lidar. We could build spatial awareness into damn near everything. It'll be a cambrian explosion of autonomous robots.
RIP to every single camera in existence if that happens. Lidar is awful with damaging camera lenses.
I had to look this up, because I had never heard of it. How could a lens be damaged by infrared lasers?
It turns out it’s the sensors that are easily damaged by high powered lidar lasers.
https://spectrum.ieee.org/amp/keeping-lidars-from-zapping-ca...
The next headline will be that it also damages human retinas.
It's not safe just because it's infrared. And the claim that it's safe because of the exposure time is highly questionable; would you be okay with that for any other laser?
There are complaints that some Volvo cars have damaged iPhone cameras. It's not even clear if Apple covers those under warranty. We've seen car-review YouTubers get their iPhone camera sensors damaged on camera (captured by a second camera) while reviewing.
One such review where Marques shows how it happened to his phone
https://youtube.com/shorts/oeHtfMFdzIY?si=cANJDT5BLfdd9ZUT
One highlight from the video, he says most cameras are fine, it's just iphones that don't have a very good IR filter. Which sounds correct, in my experience most cameras have pretty substantial IR filters that have to be removed if you want to photograph IR.
I also wonder if the smaller sensor size on phones contributes, since the energy is being focused onto a smaller spot.
Either way, for that to happen he was filming the LIDAR while active, for a decent amount of time, from right next to the car. I assume under normal conditions it wouldn't be running constantly while the vehicle is stationary?
Is it possible that the iPhone filters are weaker due to FaceID requirements? I seem to recall that FaceID (and similar systems, like Windows Hello) depend on IR to get a more 3D map of the face, so it'd make sense that they want to be more sensitive in that range.
Laptops aren't generally being used in the same areas as cars though, so you wouldn't expect to see as many cases involving Windows Hello compatible laptops/cameras.
That wouldn't make sense on the back of the phone.
Possibly. Some models of iPhone use LIDAR for AR tooling, such as the Measure app.
If this is true, the eyes are no better. Especially as it can't be seen, who will look away? And at night, with open irises?
There was someone who had his eyes damaged by sitting next to a heater.
Are the eyes really "no better" in this scenario? From the above article it seems we tuned the behavior to the eye specifically (but not necessarily image sensors):
> Moving to a longer wavelength that does not penetrate the human eye allows new lidars to fire more powerful pulses and stretch their range beyond 200 meters, far enough for stopping faster cars. Now a claim of lidar damage to the charge-coupled-device (CCD) sensor on a photographer's electronic camera has raised concern that new eye-safe long-wavelength lidars might endanger electronic eyes.
> Producers of laser light shows are well aware that laser beams can damage electronic eyes. “Camera sensors are, in general, more susceptible to damage than the human eye,” warns the International Laser Display Association
"doesn't penetrate the human eye" seems a bit hand wavy, but I take it to mean "these length pulses in this wavelength are tuned to have the power not be enough to damage the eye". Camera lenses may not have the same level of IR filtering/gathering area or, if they do, there is nothing implying the image sensor has the exact same tolerances as the inside of the eye. From the same:
> Sensor vulnerability to infrared damage would depend on the design of the infrared filters
A heater usually damages the eyes through drying out/heating up the outside layer with constant high intensity, not by causing damage to the retina (post filtering). https://hps.org/publicinformation/ate/q12691/
> Furthermore, since the eye blocks the IRR, the eye begins to overheat leading to eye damage and possible blindness. Because of this, you should not look at the heater for an extended period of time.
Enough intensity at any wavelength will damage any camera or eye, of course, but the scenario here seems to be built around that question for the eye. Similarly, I've heard of Waymos causing 6 mph accidents but no reports of eye damage from any car LiDAR. Despite that, in the above YouTube clip Marques Brownlee shows his camera being clearly damaged as it's moved around.
> The biggest concern is not photographic cameras but rather the video cameras mounted on autonomous cars to gather crucial information the cars need to drive themselves.
So they don't care if that breaks my phone camera? Wtf?
The Epstein-class argument is: if you're not my property, why should we care?
Is there any deeper study on long term effects regarding retinal damage?
I would imagine, even with safe dosages, there would be some form of cumulative effect in terms of retinal phototoxicity.
More so if we consider the scenario that this becomes a standard COTS feature in cars and we are walking around a city centre with a fleet of hundreds of thousands of these laser sources.
Some lidar units simply use the wavelength that the human eye is opaque to.
The grandparent comment is about camera lenses with little to no near infrared cutoff filter. Some older iPhones were like that and that was the original breaking story.
> human eye is opaque to
Absorbing the laser isn't necessarily any good. Very hypothetically it could lead to cataracts.
Sun emits much stronger IR, near-IR, UV
Absolutely, and is a major cause of cataracts. Somewhat near 100% of people with lenses in their eyes will get cataracts eventually if they are ever exposed to unfiltered sunlight.
And staring directly at the sun is not recommended.
That's why we don't look at it.
I remember those old cellphones with weak IR filters. It was a scandal because light clothing turns out to be more transparent to IR than to visible light so they were acting as a sort of clothing "X-Ray" in bright light. Creepers on the Internet tried to start a whole new genre of porn but were shut down in a hurry by cellphone manufacturers adding robust IR filters on the next generation of smartphones.
Shame that perverts had to ruin that for us; it was kinda neat to point a TV remote at the camera and see the bulb light up.
I suspect we can't quantify human eye-damage enough to easily rule-out chronic effects... until it's too late for the patient.
I wish this was true. It'd immediately be the best way to fight surveillance systems like Flock
iPhones have had lidar for years, have cameras been affected?
Other cameras. When the lidar laser points at the camera sensor.
Could be a gain for privacy ;-)
TIL!
Thanks! What a headache
we'd likely see new coatings and sensor designs that avoid it, not trivial but also not the end of the world
What? Please explain!
Sensor damage
https://youtube.com/shorts/oeHtfMFdzIY?si=hpLBgqom_kHVPuhL
The short-range stuff is already $150-300 per unit. If you're thinking indoor robots that's already technically feasible. Over 25% of all Chinese cars being produced today have LiDAR.
Even mid-range sensors used in ADAS systems only cost $600-750. The long-range stuff that's needed for trucking or robotaxis is $1,500–6,000
There are already very good sub-$100 lidars, especially for 2D since they were made en masse for vacuum cleaners. E.g. the LD19 or STL-19P as they're calling it now for some reason. You need to pair them with serious compute to run AMCL with them, plus actuation (though ST3215s are cheap and easy to integrate now too) and control for that actuation which also wants its own compute, plus a battery, etc. the costs quickly add up. Robotics is expensive regardless of how cheap components get.
I think the difference is that these are intended for automotive use and have a much longer range than the ones in your Roomba.
True, you have to go up to $120 for the 25m version, or $450 for Unitree's L2 which gets 30m in 3D. That's about as much you could possibly ever need unless you're making high speed vehicles that need more reaction time. In which case you probably shouldn't be relying on the cheapest thing on the market :)
RIP to humans under authoritarian regimes?
And, I guess, even more advanced surveillance.
I think we’re well past the point where mass surveillance was a technical challenge. Mass oppression through autonomous violence however…
Even back when Snowden was current news, we'd reached the point where laser microphones could cover every window in London for a bill of materials* less than the annual budget of London's police force.
* I have no way to estimate installation costs, but smartphones show that manufacturing at this scale doesn't need to increase total cost 10x more than the B.o.M.
https://en.wikipedia.org/wiki/Optimus_(robot)
LIDAR would be preferrable to cameras when it comes to privacy actually
I don't think it makes a difference. Dense lidar gives you more information than 2D colour imagery.
There are SLAM cameras that only select "interesting" points, which are privacy preserving. They are also very low power.
People saying LIDARs can't recognize colors or LIDARs can't take pictures don't know what they are talking about.
They're just fancy cameras with synced flashes. Not Star Trek material-informational converting transporters. Sometimes they rotate, sometimes not. Often monochrome, but that's where Bayer color filters come in. There's nothing fundamentally privacy preserving or anything about LIDARs.
I don't know what I'm talking about, but isn't the wavelength of the laser pretty limiting to the idea of just slapping a Bayer color filter on? Like, if the laser is IR (partly so it's not visually disrupting all the humans around it), doesn't the signal you get back lack the visible-spectrum components you'd need to get RGB right?
> LIDAR would be preferrable to cameras when it comes to privacy actually
Right, but how likely is it that there will be LIDAR and no cameras (especially given the low cost of the latter)?
I’d definitely feel much better if most cameras in the world were replaced by LIDAR. I feel like it would be much tougher to have a flawless facial recognition program with LIDAR alone
Who needs facial recognition if you can identify people based on gait?
Gait recognition is almost entirely hype. Sure it works to tell the difference between n = 10 people but so what, you can tell the difference between a group of 10 people by what kind of shoes they are wearing.
Judicial systems where a 6% error rate is deemed way too high to lead to a conviction.
Then you combine it with some other technique, eg tracking daily routes of individuals, to lower the error rate. You only need a handful of bits to distinguish all inhabitants of the average city. But imho that error rate would likely be low enough for some judge to authorize more invasive surveillance of suspects thus identified.
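A quick sketch of the "handful of bits" arithmetic. The feature resolutions below are made-up, purely illustrative numbers, and real features are correlated (so they'd contribute fewer bits than this independence assumption suggests), but it shows how several weak identifiers can stack up to single someone out of a city:

```python
import math

def bits(n_classes):
    """Information (in bits) from a feature that splits people into n roughly equal groups."""
    return math.log2(n_classes)

city = 1_000_000
needed = math.log2(city)  # ~19.9 bits to uniquely identify one resident

# Hypothetical feature resolutions (illustrative only):
features = {
    "gait cluster (10 groups)":   bits(10),   # ~3.3 bits
    "home neighborhood (100)":    bits(100),  # ~6.6 bits
    "work area (100)":            bits(100),  # ~6.6 bits
    "commute time slot (16)":     bits(16),   #  4.0 bits
}
total = sum(features.values())
print(f"needed: {needed:.1f} bits, available: {total:.1f} bits")
```

Under these toy assumptions the combined signals clear the ~20-bit bar, which is the intuition behind combining an error-prone gait match with route tracking.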
The minute internet became widespread it was game over.
Pros and cons. :/
It'll never happen, but we need a bill of rights for privacy. The laypeople aren't well-versed or pained enough to ask for this, and big interest donors oppose it.
Maybe the EU and states like California will pioneer something here, though?
Edit: in general, I'm far more excited by cheap lidar tech than I am afraid of the downsides. We just need to be vigilant.
Lidar doesn’t really give you much to “see”, just shape and distance…so I’m a bit confused how it can be used for invasive surveillance, do you mean when fused with vision input it somehow allows it to infer more privacy stuff?
The EU already has. GDPR and the AI Act puts a lot of limits on what you can do in the open space, although it doesn't always go far enough.
And barely gets enforced
2775 fines for a total of €6.8B since July 2018. It's not A LOT (I would hope for A LOT MORE fines), but it's not nothing.
https://www.enforcementtracker.com/
It’s very interesting. Thanks for sharing.
But also kinda weird. There seems to be a lot of fines for hospitals for example.
Some Portuguese hospital was fined €400,000 for ‘Insufficient technical and organisational measures to ensure information security’
Medical, banking and insurance are three industries that the European data privacy watchdogs are much more strict about because of the potential for damage.
https://en.wikipedia.org/wiki/GDPR_fines_and_notices
Top 5 fines:
1 - Meta - Ireland - €1.2 billion
2 - Amazon Europe - Luxembourg - €746 million
3 - WhatsApp - Ireland - €225 million
4 - British Airways - UK - £183 million
5 - Google - France - €60 million
I wish every law barely got enforced this way.
pretty pathetic, but people keep insisting you can regulate capital
I'd say the numbers listed here prove the GPs point of poor enforcement. The largest fine is roughly 0.97% of Meta's 2023 revenue, the equivalent of a $600 fine for somebody making 60k / year. It's a tiny-tiny cost of doing business at best, definitely not a deterrent, given Meta's blatant disregard for GDPR since then.
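The arithmetic behind the analogy, for anyone who wants to check it (the revenue figure here is Meta's reported ~$135B for 2023; the comment's 0.97% presumably used a slightly different revenue year, and EUR/USD conversion is ignored):

```python
fine = 1.2e9         # record GDPR fine against Meta, EUR
revenue = 134.9e9    # Meta's 2023 reported revenue, USD (currencies mixed for simplicity)

pct = fine / revenue * 100
equivalent = 60_000 * pct / 100   # the same share of a $60k/year income

print(f"fine as share of revenue: {pct:.2f}%")
print(f"equivalent personal fine: ${equivalent:.0f}")
```

Either way you cut it, the fine lands in the high-hundreds-of-dollars range when scaled to a $60k income, which is the comment's point.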
> the equivalent of a $600 fine for somebody making 60k / year
I don't know about you, but on that income I would certainly not brush off such a fine as a "cost of doing business". Would it cause me financial trouble, or would it force me to sacrifice other expenses? Absolutely not. But would I feel frustrated at having to pay it, feel stupid for my mistake, and do my best to avoid it in the future? Absolutely yes.
My bad, a better analogy would be a dealer making 60k / year selling drugs who gets caught by police and is fined $600. I wouldn't expect them to change much.
Fair enough. In that sense I do see value in the analogy.
Would you still do your best to avoid it if that involved taking a pay cut of more than $600/year?
1% of Meta's global revenue is a tiny-tiny cost of doing business? At that point, I think I can stop even trying to argue here. It's a massive fine any way you put it. Especially when you consider the ceiling hasn't been reached and non compliance is more and more costly by design.
Their net profit was $60billion in 2024. This is peanuts. It can fluctuate by multiples of this fine in a month, depending on whether or not they've had a bad or good month, nevermind year. This pretty much is just a cost of doing business.
It's not even 1% of their annual revenue, let alone the entire multi year period they've been in breach before and since. It's nothing to them.
The interesting part is that it keeps going up. You seem to believe we have somehow reached a cap where Meta can just expense it as a cost of doing business. That's not how European law works. The fine maximum is far higher and repeated non compliance keeps making the fines higher and higher. It's a ladder not a sizing precedent.
Unfortunately it doesn't in practice. Meta's total revenue since 2018 when GDPR came into force is just shy of $1T. Even with all the smaller fines combined, the total amount of GDPR related fines is in the range of $3B. It's a rounding error.
There isn't a trend of increasing fines, nor has any fine even reached the cap, let alone applied multiple times for the recurring violations. Even more with the current US administration's foreign policy towards the EU.
While GDPR as a law is fine, with the exception of enforcement limitations, enforcement so far has been a complete joke.
The maximum GDPR fine is 4% of global revenue in the previous year. If a company has a 30% profit margin then it can, in theory, treat it as a cost of doing business, indefinitely.
It's 4% per fine. Each violation is a fine and Meta owns multiple companies that can be fined. But 4% of global revenue already can't be treated as just a cost of doing business. Their shareholders would murder them.
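A toy illustration of both sides of this sub-thread, using made-up numbers (100 units of annual turnover, an assumed 30% margin): a single 4%-of-turnover fine is painful but absorbable, yet because the cap applies per violation, repeated fines stack quickly:

```python
revenue = 100.0   # arbitrary units of annual global turnover
margin = 0.30     # assumed profit margin
cap = 0.04        # GDPR ceiling: 4% of global annual turnover, per violation

profit = revenue * margin   # 30 units of profit
max_fine = revenue * cap    # 4 units per maximum fine

# One maximum fine eats about an eighth of annual profit;
# repeated per-violation fines erase it entirely well within a year.
print(f"single max fine as share of profit: {max_fine / profit:.1%}")
print(f"max fines before profit is gone:    {profit / max_fine:.1f}")
```

So whether the cap is "a cost of doing business" depends entirely on how often regulators are willing to apply it, which is the disagreement in the thread.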
Humanity has never known a world without surveillance. Responsibility cannot exist without being watched. Primitive tribes lived under the constant eye of the group, and agricultural eras relied on the strict oversight of the clan. Modern states simply adopted new tools for an ancient necessity. A society without monitoring is a society without accountability, which only leads to the Hobbesian trap of endless conflict.
Mass surveillance is a relatively recent development. Dense urban civilizations are not. And yet their denizens have not historically devolved into a “nasty, brutish, and short” existence. In fact, cities have been centers of culture and learning throughout history. How does this square with your theory?
The 19th century was the true cradle of mass surveillance. Civil registration, property tracking, and institutionalized police forces provided the systemic oversight required to manage dense urban life. These administrative tools served as the analogue version of digital monitoring to ensure every citizen remained known and categorized. Cities thrived as centers of culture only because these new forms of visibility prevented the Hobbesian collapse that anonymity would have otherwise triggered.
And what about all of the previous ~40-50 centuries where cities were centers of learning and art and not Hobbesian hell holes? Ur is slightly older than the 19th century, I believe.
And note that there is evidence for cities of tens of thousands of inhabitants from 3000 BCE, while Rome reached 1 000 000 residents by 1CE. Again, without becoming some Hobbesian nightmare.
None of those things are remotely comparable to the surveillance we're talking about. There's a world of difference between, "My city knows who owns what properties and also we have a police force", and "Western intelligence agencies scoop up every bit of data they can grab about anyone on the planet and store it forever"
In my country it wasn't until the late 19th century that someone had the balls to stop going to church on Sunday. It was a huge scandal at the time but it all worked out in the end.
Humans have always done mass surveillance on each other. You don't need technology for that.
At no point in time before this era was it possible for a random bureaucrat to have a reasonably comprehensive list of everyone in a country who attended church yesterday.
Scale matters.
This is a reduction to absurdity. Those old societies you cite didn't actively surveil with the goal of micromanaging people's daily lives the way that modern ones do.
Rural surveillance was far more suffocating because every single action was subject to the community gaze. This is exactly why classic literature frames the journey to the city as a liberation from the crushing weight of the village eye. The idea of the peaceful countryside is a modern utopian fantasy that ignores how ancient clans dictated every aspect of life including marriage and death. Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management. Ancient society did not just monitor people; it owned their entire existence through inescapable social visibility.
"It was always shit everywhere" is revisionist history born out of the fantasy of statists looking to justify the modern (administrative) enforcement state.
While the lack of anonymity in small towns certainly puts a damper on one's ability to deviate too far from social norms, the list of things that could get you subjected to government violence without creating a victimized party was infinitely shorter. Things that get state or state-deputized enforcers on your case today were, 150+ years ago, matters of "yeah, that's distasteful, he'll have to settle that with God," or would only come back to bite you if something actually happened, because society did not have the surplus to justify paying nearly as many people to go around looking for deviance that could be leveraged to extract money. Those people had far more practical day-to-day freedom to run and better their lives than we do now, even if constrained by the fact that they had substantially less wealth to leverage to that effect.
> Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management
And they almost exclusively deal in things that historical societies didn't even bother to regulate.
You're beyond delusional if you think running afoul of an HOA is worse than running afoul of the local, state or federal government. Yeah, they can screech and send you scary letters with scary numbers, but they don't get the buddy treatment from courts that "real" governments do (to the great injustice of their victims), and their procedural avenues for screwing their victims on multiple axes are way more limited.
Seriously, go get in a pissing match with a municipality over just where the line for "requires permit" is and get back to me. Unless you want to do something that is more than petty cosmetic stuff and unambiguously in violation of the rules, an HOA is a paper tiger for the most part (not to say that they don't suck).
That's an incredibly bullshit argument to defend the indefensible.
Your reaction actually proves the point. Aggression thrives in anonymous spaces because the lack of oversight removes the weight of accountability. When people feel unobserved, they quickly abandon the social friction that once held tribes and clans together. You are essentially providing a live demonstration of why a society without any form of monitoring inevitably slides into the Hobbesian trap.
I don't think a random internet comment proves anything about society at large.
People don't hesitate to be aggressive even when they're not anonymous and there's a threat of accountability - see, all crime, or people just acting shitty toward others.
Mass surveillance does not cause everyone to magically get along.
History shows that whenever surveillance gaps appear, chaos follows. The explosion of crime during early urbanization was the specific catalyst for the creation of modern police forces because traditional social bonds had failed to provide oversight in growing cities. Japan maintains its safety through a deep-rooted culture of mutual neighborhood monitoring that leaves little room for anonymity. Even China successfully quelled the violent crime waves of its early economic boom by implementing a sophisticated surveillance network.
Neither police forces nor "neighborhood monitoring" are equivalent to mass surveillance, though.
Anyway I'm curious why - despite having less anonymity than at any point in history, at least from the perspective of law enforcement - we still see high crime rates, from fraud to murders?
I've always wondered if Tesla's issues with FSD were a sensor problem or an intelligence problem. I think Tesla's claim is that when they look at accident footage, it is clear to a human how the car could have avoided the accident, and thus, if FSD was more intelligent, the accident could have been avoided. Is this reasoning wrong?
I personally find it convincing that the problem with self-driving is mostly that the models aren't intelligent enough, and that adding LiDAR wouldn't be enough to achieve the reliability required. But I don't know, I don't really work in that field so maybe engineers who have more experience with self driving might say otherwise.
It is easy to underestimate how much one relies on senses other than vision. You hear many kinds of noises that indicate road surface, traffic, etc. You feel road surface imperfections telegraphed through the steering wheel. You feel accelerations in your butt, and conclude loss of traction from response of the accelerator and motion of the vehicle. Secondly, the human eye has much more dynamic range than any camera. And is mounted on an exquisite PTZ platform. Then turning to the model -- you are classifying obstacles and agents at a furious rate, and making predictions about the behavior of the agents. So, in part I agree that the models need work, but the models need to be fed, and IMHO computer vision is not a sufficient sensor feed.
Consider an exhaust condensation cloud coming from a vehicle's tail pipe -- it could be opaque to a camera/computer-vision system. Can you model your way out of that? Or is it also useful to do sensor fusion of vision data with radar data (cloud is transparent) and others like lidar, etc. A multi-modal sensor feed is going to simplify the model, which in the end translates into compute load.
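To make the "multi-modal feed simplifies the model" point concrete, here's a toy sketch of inverse-variance fusion, the simplest way two independent range estimates can be combined so the result beats either sensor alone. All numbers are illustrative assumptions, not real sensor specs or any production stack:

```python
# Toy inverse-variance fusion of two independent, unbiased range estimates.
# Numbers are illustrative, not real sensor specifications.

def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance linear fusion of two unbiased estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Camera-inferred depth: 48 m, large uncertainty (sigma 4 m -> var 16).
# Radar range: 50.2 m, small uncertainty (sigma 0.5 m -> var 0.25).
r, v = fuse(48.0, 16.0, 50.2, 0.25)
# The fused estimate sits close to the radar value, and the fused
# variance is smaller than either input's variance.
```

Real stacks (Kalman filters, learned fusion) are far more involved, but the principle is the same: each extra modality shrinks the uncertainty the model has to reason around.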
> I've always wondered if Tesla's issues with FSD were a sensor problem or an intelligence problem
Even if it’s an intelligence problem, it’s possible that machine intelligence will not reach the point where it can resolve it anytime soon, whereas more sensors might circumvent the issue completely. It’s like with Musk’s big claim (that humans use only cameras to drive): the question is not whether a good enough brain would be able to drive vision-only, but whether Tesla can build that brain.
Maybe? But LiDAR also just gives a more complete picture of what is around the car. I think this is supported by how many miles Waymo cars run unsupervised vs Tesla.
I am skeptical that Tesla has this solved, but I'm interested in seeing how it goes as they move to expand their robotaxi service.
Some problems are simply undecidable: if the desired output varies wildly for identical inputs, you simply need more information. No algorithm will help you.
Sensors or intelligence, at the end of the day it’s an engineering problem which doesn’t require pure solutions. Sometimes sensors break and cameras get covered in mud.
The problem is maintaining an acceptable level of quality at the lowest possible price, and at some point you spend more money on clever algorithms and researchers than a lidar.
Tesla is adding radar, and I predict before long it will add LiDAR, because that's the only way to get to Level 3, which is a requirement for moving forward in California.
https://www.fccidlookup.com/report/tesla-new-millimeter-wave...
Some internet comments said this was for detecting people in the cabin, not for detecting things outside the car.
They're not reading the actual FCC document then, which says:
Strategy: The move brings Tesla's sensor approach closer to competitors like Ford, GM, and Rivian, who utilize multi-modal systems (cameras plus radar) for their driver-assistance features.
Potential: This 'HD radar' could provide critical redundancy and data needed for achieving higher levels of driving automation and improving system performance in all conditions.
Low cost, sub $200 automotive grade LIDAR sensors are already available.
Cepton Technologies offers Nova [0], Nova-Ultra [1] sensors both at a sub-$100 price point [2]. These feature a 120°(H) x 90°(V) FOV at 50m, with 2.7M points per second sampling.
Velodyne introduced Velabit in 2021, for $100. Boasting 100m range and a 60-degree horizontal FoV x 10-degree vertical FoV.
The article claims that:
> What distinguishes current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs.
which is simply not true. Cepton (currently offering) and Velodyne (acquired by Ouster in 2023) have done this for years.
99% of LiDAR production comes from just 4 Chinese companies. Yes, short-range systems are already in the $150-300 range, but MicroVision is promising to produce this in Washington state.
Basically they're saying "we can catch up to China by 2028/2029 (so please subsidize us)".
>Cepton Technologies offers Nova [0], Nova-Ultra [1] sensors both at a sub-$100 price point
Where? How? I'm only seeing the Nova on ebay for between $4000 and $5000.
Cepton primarily operates B2B, as B2C demand for specialized LIDAR like this is pretty low. Anything you find on eBay is either a leftover dev kit or salvage. This is pretty much the case for MicroVision, Ouster etc.
Interestingly, there have been people in the LIDAR industry predicting costs like this for many years. I heard numbers like $250 per vehicle back in 2012 [1]
Of course, ambitious pricing like this is all about economies of scale - sensors that are used in production vehicles are ordered by the million, and that lowers the costs massively. When the huge orders didn't materialise, the economies of scale and low prices didn't materialise either.
[1] https://web.archive.org/web/20161013165833/http://content.us...
Also: 'Luminar Technologies, a prominent U.S. lidar manufacturer, filed for Chapter 11 bankruptcy in December 2025.' LIDAR is useful in a small set of scenarios (calibration and validation), but do not bet the farm on it or make it the centerpiece of your sensor suite.
Also, MicroVision, the company in OP's article bought the IP from Luminar. This feels like a circular venture capital scam. Luminar originally went public via SPAC and made a bunch of people very wealthy before ultimately failing.
The same Luminar from the Mark Rober video?
https://www.forbes.com/sites/bradtempleton/2025/03/17/youtub...
This is very wrong. LIDAR scanners have revolutionized surveying by enabling rapid, high-precision 3D mapping of terrain and infrastructure, capturing millions of data points per second. LIDAR can penetrate dense vegetation, allowing accurate ground-level mapping in forested or obstructed areas. Drone-mounted LIDAR has become very popular, and tripod-mounted LIDAR scanners are very commonly used on construction sites. Handheld LIDAR scanners can map the inside of buildings with incredible accuracy; this is very commonly used to create digital twins of factories.
And none of this is on the order of magnitude that consumer automotive would have.
The EU requires every new car to have Autonomous Emergency Braking. If LiDAR becomes cheaper than radar, this is a potential market of millions of units.
Lidar is critical for any autonomous vehicle. It turns out a very accurate 3D point cloud of the environment is very useful for self driving. Crazy, I know.
Useful but not at all required. Camera + radar is sufficient for driving, and camera + ultrasonics is fine for parking.
Radar is just cheaper than the extra cameras and compute it replaces; it's not really a strict requirement either.
Look at how the current cars fuck up, it's mostly navigation, context understanding, and tight manoeuvres. Lidar gives you very little in these areas
All of the actually WORKING self driving systems use LIDAR. This is not a coincidence.
I work with programs approaching L3+ from L2, with the requirement that the system works for 99% of roads (not tesla before people start fixating on that).
We find that the cases where lidar really helps are in gathering training data, parking, and if focused enough some long distance precision.
None of these have been instrumental in a final product; personally I suspect that many of the cars including lidar use it for data collection and edge cases more than as part of the driving perception model.
Accidents are not normal driving situations but edge cases.
Sort of; accidents are the absolute core of the product. They are rare, but they are the focus of the design.
By edge cases I mean scenarios like the lights going out in an underground garage; low vision due to colourful smoke or dust, or things like optical illusions or occlusion that a human would just need to remember.
Lidar can help, but not really enough to be worth it.
Lidar is by far the most accurate source of range data. You need to explain why Waymo and Zoox use lidar in direct contradiction to what you claim.
Waymo is the best current autonomous driving system and Waymo uses LIDAR. This is because LIDAR is an incredibly effective sensor for accurate range data. Vision and Radar range data is much less accurate and reliable.
Waymo uses LIDAR in the real-time control loop: it combines LiDAR, camera, and radar data in real time to build a constantly updated 3D representation of the environment.
I fundamentally don't trust any level 4 system that doesn't use LIDAR
Yes, I am aware of waymo... What they do is impressive. However they don't have a product that works for all highways yet, that's the space I work in, and we have no real fixation on lidar... It's nice but not a requirement, and hard to justify the cost unless you can make sales because of it (and there are some places where this is the case, but not everywhere)
You don't need the mm precision of lidar very often; we find that it offers nothing at speed over radar; and in tight manoeuvres the cameras we need for human park assist and ultrasonics do well enough.
It is not more accurate, but it is more precise; and that doesn't really matter. (Radar gives you relative speed directly, which is more important than a very precise point at highway speeds.)
Waymo is level 4. I think currently it is nearly impossible to make a level 4 system as safe as Waymo without Lidar. Maybe new 4d imaging radar or THz radar could change this. Sensor modalities have physics-based limitations, current camera+radar isn't sufficient for L4.
Like Waymo? (https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...) 17 years after betting the farm on LIDAR, the solution fails to navigate a puddle. Sorry, but they bet on the wrong technology; Tesla has overtaken them with a multi-camera and NN solution.
> Tesla has overtaken them with multi camera and NN solution.
Let me guess, you heard this from Elon?
Your conclusion from a single incident is a bad inference. One vehicle getting confused by a puddle (likely a sensor fusion edge case or mapping artifact, not a fundamental LIDAR failure) doesn't indict the technology. Tesla's cameras have produced vastly more failures.
Waymo has driven tens of millions of autonomous miles with a serious injury/fatality rate dramatically lower than human drivers. The actual data shows the technology works. Tesla FSD still requires active driver supervision and is not legally or technically a robotaxi system. Comparing them as if they're at parity is wrong.
LIDAR gives direct metric depth with no inference required. Camera-only systems must infer depth from 2D images using neural networks, which introduces failure modes LIDAR doesn't have. Radar is very valuable when LIDAR and cameras give ambiguous data.
On what metrics has Tesla overtaken Waymo? Deployed robotaxi revenue miles? No. Disengagement rates? No published comparable data. Safety per mile in driverless operation? No.
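The "inferred depth" failure mode mentioned above can be quantified with the standard stereo error model: depth z = f·B/d, so a fixed disparity error maps to a depth error that grows roughly quadratically with distance, while lidar's ranging error stays near-constant. A small sketch with assumed parameters (not any specific vehicle's camera rig):

```python
# Illustrative stereo depth-error model: z = f*B/d, so a fixed disparity
# error sigma_d gives depth error sigma_z ~ z^2 * sigma_d / (f*B).
# All parameter values below are assumptions for illustration.

f_px = 1000.0    # focal length in pixels (assumed)
baseline = 0.3   # stereo baseline in metres (assumed)
sigma_d = 0.25   # disparity matching error in pixels (assumed)

def depth_sigma(z):
    """Approximate 1-sigma depth error at range z metres."""
    return (z * z) * sigma_d / (f_px * baseline)

err_10 = depth_sigma(10.0)    # centimetre-scale error at close range
err_100 = depth_sigma(100.0)  # metre-scale error at 100 m
```

Under these assumptions the error at 100 m is 100x the error at 10 m, which is the structural reason direct time-of-flight ranging is attractive at highway distances.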
A Tesla wouldn't stop for a puddle. Also, it's not locked to a small geofenced area (people have driven coast to coast on FSD without a single intervention, including parking spot to parking spot). When I can buy a Waymo vehicle that does this, then Waymo will have caught up with Tesla.
Wow, so it can cope with driving on the highway. That's the easy part.
Your puddle example is utterly irrelevant. Teslas are notorious for phantom braking. Robotaxis are very much locked to tiny geofenced areas. Some even shaped like a penis because Musk is such a child.
"people have driven coast to coast without a single intervention on FSD including parking spot to parking spot"
I find this claim very dubious. Prove it. Teslas never drive empty for a very good reason.
Err, they have lots of Model Ys in Austin as robotaxis right now with no drivers. I guess this is also 'dubious'. Look, it's clear you have a huge bias; I would urge you to read up on https://grokipedia.com/page/List_of_fallacies, otherwise your emotional responses will blind you to reality.
> Look it's clear you have a huge bias I would urge you to read up on https://grokipedia.com/page/List_of_fallacies otherwise your emotional responses will blind you to reality.
Writing this and linking to fake Wikipedia is actually hilarious.
"hey have lots of Model Ys in Austin as Robotaxis right now with no drivers"
They do not. They have a very small number of them open to a select number of people, not the general public, and they are limited to even smaller areas. You need to understand that Musk is NOT an engineer; he is more of a con man desperate to inflate Tesla's stock price. If he says self-driving cars don't need LIDAR, then they must actually need it.
https://futurism.com/future-society/polymarket-fortune-betti...
Polymarket user David Bensoussan has made $36,000 by betting against Musk's wildly optimistic self driving predictions.
linking to grokipedia feels like intentional rage-baiting.
Who should I believe: a random poster on Hacker News who likely makes an average salary, or Elon Musk, who is the richest man in the world and created multiple trillion-dollar companies... hard one!
What's wrong with Grokipedia? It's a bit less woke/far-left-wing, more balanced.
You are just mindlessly regurgitating the lies of Musk and using an "appeal to wealth" to justify not analyzing them. He has been lying about self driving since 2016.
https://www.forbes.com/sites/alanohnsman/2025/08/20/elon-mus...
https://futurism.com/leaked-elon-musk-self-driving
For nearly a decade Elon Musk has claimed Teslas can truly drive themselves. They can’t. Now California regulators, a Miami jury and a new class action suit are calling him on it.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
Economies of scale when they are in phones?
'MicroVision says its sensor could one day break the $100 barrier.' When an article says 'one day', read: not in the next decade.
Around a decade ago the nascent LIDAR industry boomed and dozens of startups emerged out of nowhere all racing to make cheap automotive grade LIDAR, and here we are.
Of course MicroVision is only claiming their LIDAR is suitable for advanced driver assist, but ADAS encompasses a wide array of capabilities: basically everything between cruise control and robotaxis. So there's no definition of how much LIDAR you need to do the job, just however much you feel like. Tesla feels like none at all.
Interesting to see the cost curve drop ... this always changes the market.
I have been watching the sensor space for a while. Cheap LIDAR units could open up weird DIY uses, not just cars. Regulatory and mapping integration will also matter; I tried to work with public datasets and it's messy. The hardware is only one part! But it's exciting to see multiple vendors in the space. Competition might push vendors to refine the software stack as well as the hardware. I'm keeping an eye on how these systems handle edge cases in bad weather, though. I don't think we have seen enough data yet...
> Cheap LIDAR units could open up weird DIY uses and not just cars.
Interestingly, there are already some comparatively cheap LIDAR units on the market.
In the automotive market, ideally you need a 200m+ range (or whatever the stopping distance of your vehicle is) and you need to operate in bright direct sunlight (good luck making an eye-safe laser that doesn't get washed out by the sun) and you need more than one scanning plane (for when the car goes over bumps).
On the other hand, for indoor robotics where a 10m range is enough and there's much less direct sunlight? Your local robotics stockist probably already has something <$400
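The "200m+ range (or whatever the stopping distance of your vehicle is)" figure above is easy to sanity-check with the usual d = v·t_react + v²/(2a) formula. The reaction time and deceleration below are illustrative assumptions, not regulatory values:

```python
# Back-of-envelope stopping distance: reaction distance + braking distance,
# d = v * t_react + v^2 / (2 * a). Parameter values are assumptions.

def stopping_distance(speed_kmh, t_react=1.0, decel=5.0):
    """Total stopping distance in metres for a given speed in km/h."""
    v = speed_kmh / 3.6            # convert to m/s
    return v * t_react + v * v / (2.0 * decel)

d_city = stopping_distance(50.0)      # ~33 m at city speed
d_highway = stopping_distance(130.0)  # ~167 m at highway speed
```

At 130 km/h the result is already pushing 170 m under these moderate assumptions, which is why automotive lidar specs aim at 200 m+ while a vacuum-class 10 m sensor is a non-starter.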
Neato, from San Diego, developed a $30 (indoor, parallax-based) LIDAR about 20 years ago for their vacuum cleaners [1].
Later, improved units based on the same principle became ubiquitous in Chinese robot vacuums [2]. Such LIDARs, and similar-looking but more conventional time-of-flight units, sell for anywhere between $20 and $200, depending on the details of the design.
[1] https://scholar.google.com/scholar?q=%22A+Low-Cost+Laser+Dis... [2] https://github.com/kaiaai/awesome-2d-lidars/blob/main/README...
Sounds like the quality isn't all that great, but LD06 sensors look like they're about $20, and someone who works on libraries for these suggested the STL27L, which seems to be about $160. Here's an outdoor scan from it: https://sketchfab.com/3d-models/pidar-scan-240901-0647-7997b...
Not sure if the LD06 is a scanner like this or if it just sweeps a single plane (like you'd use for a cheaper robot vac).
MicroVision has been saying that for half a decade. Products? Nowhere to be found.
So tiring to keep hearing this argument "humans only use vision to drive, so why would self driving cars need more?"
This argument is inherently anti-progress. It's like saying humans had been using sextants to navigate for hundreds of years, so why GPS?
A more sensible question is, why not?
I wonder if Comma.ai will ever be open to incorporating this into openpilot.
I always thought the argument that humans are adequate drivers and hence only cameras was not great. Why not actually be better than humans at sensing and driving?
> laser pulses
> phased-array
I'm not well versed in RF physics. I had the impression that light-wave coherency in lasers had to be created at a single source (or amplified as it passes by). This is the first time I've heard of phased-array lasers.
Can someone knowledgeable chime in on this?
The beam is split and re-emitted in multiple points. By controlling the optical length (refractive index, or just the length of the waveguide by using optical junctions) of the path that leads to each emitter, the phase can be adjusted.
In practice, this can be done with phase change materials (heat/cool materials to change their index), or micro ring resonators (to divert light from one wave guide to another).
The beam then self-interferes, and the resulting interference pattern (constructive/destructive depending on the direction) are used to modulate the beam orientation.
You are right that a single source is needed, though I imagine that you can also use a laser source and shine it at another "pumped" material to have it emit more coherent light.
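The steering-by-phase idea described above can be sketched numerically: apply a linear phase gradient across a uniform line of emitters and the far-field interference peak lands at the commanded angle. A toy model (element count, wavelength, and spacing are illustrative assumptions):

```python
import cmath
import math

# Far-field array factor of a uniform linear emitter array. A linear
# phase gradient dphi per element steers the main lobe to the angle where
# k * d * sin(theta) = dphi. All parameters below are illustrative.

N = 16               # number of emitters (assumed)
wavelength = 905e-9  # a common lidar wavelength, metres (assumed)
d = wavelength / 2   # half-wavelength spacing avoids grating lobes
k = 2 * math.pi / wavelength

target_deg = 20.0
dphi = k * d * math.sin(math.radians(target_deg))  # per-element phase step

def intensity(theta_deg):
    """Relative far-field intensity at angle theta (degrees)."""
    t = math.radians(theta_deg)
    af = sum(cmath.exp(1j * n * (k * d * math.sin(t) - dphi))
             for n in range(N))
    return abs(af) ** 2

# Scanning over angles, the main lobe sits at the commanded 20 degrees,
# where all N contributions add in phase (intensity N^2).
peak = max(range(-90, 91), key=intensity)
```

Changing `dphi` (via heaters or ring resonators, as described above) moves the lobe with no moving parts, which is the whole appeal of solid-state beam steering.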
I've been thinking about possible use-cases for this technology besides LIDAR. Point-to-point laser communication could be an interesting application: satellite-to-satellite communication, or drone-to-drone in high-EMI settings (a battlefield with jammers). This would make mounting laser designators on small drones a lot easier. Here you go, free startup ideas ;)
In principle, as the sibling comment says, you could measure just the phase difference on the receiver end. The trick is that it's much harder for light frequencies than for radar. I'm not even sure we can directly measure the phase of a light beam, and if we could, the Nyquist frequency is incredibly high: sampling at 2x the carrier frequency takes us to PHz rates.
There might be something cute you can do with interference patterns, but I have no idea about that. We do somewhat similar things with astronomical observations.
A phased array is an antenna composed of multiple smaller antennas within the same plane that can constructively/destructively aim its radio beam within any direction it is facing. I'm no radio engineer but I think it works via an interference pattern being strongest in the direction you want the beam aimed. This is mostly used in radar arrays though I suppose it could work with light too since it is also a wave.
I think about it like a series of waves in a pool. One end has wave generators (the lasers) spaced appropriately such that resulting waves hitting the other end interfere just right and create a unified wavefront (same phase, amplitude, frequency).
NB: just my layman's understanding
Not an expert, but main challenges with laser coherency are present when shaping the output using multiple transmitters.
For lidar you transmit a pulse from a single source and receive its reflection at multiple points. Mentioning phased array with lidar almost always means receiving.
Are we sure these things aren't damaging our eyes? It's lasers shooting all over the place, right?
When designed, built, installed and calibrated correctly, the power and wavelengths used are not considered harmful to humans.
What are the chances some non-trivial proportion of the millions of cars on the road will not have their LIDAR designed, built, installed or calibrated correctly? I suspect this is going to be a recognized public health issue in a decade or two. (It will likely be an issue well before that, but unrecognized...)
There is an incentive to use higher power. Push the edge of safety limits to achieve higher performance from lower cost devices, for example.
It occurs to me there is an opportunity here. Passive lidar detectors sampling fleets of vehicles in the real world, measuring compliance and detecting outliers, would be interesting. A well placed, stationary device could sample thousands of vehicles every day. Patterns will emerge among manufacturers. Failure modes will be seen.
Cursory queries on this reveal nothing. Apparently, no one is doing this. We're all relying on front end certification and compliance. No thought given to the real world of design flaws, damage, faulty repairs, unanticipated failure modes, etc.
Apparently there are lidar jammers. I bet those are rigorously compliant with Class 1 safety regs... No one manufacturing those is ever going to think; "hey, why not a 50W pulse train?"
For every one of those safety measures, the number of devices where it is intentionally bypassed or ignored is assuredly non-zero.
But is it going to rise to a level of concern? I don't think we're going to see a ton of cars with blinding lasers installed, unless they are installed to intentionally blind people.
If you have used Face ID, or someone has pointed a modern smartphone's face detection at you, or you've pulled up to a modern intersection, you've been blasted with lasers. A day may come when that's the largest concern, but today it's not my primary problem, and investing in FUD isn't going to bring any benefits.
There's going to be an expanding market for laser-proof sunglasses.
That's a lot of qualifiers. And replace "humans" with "cameras" and I'm reminded that despite their well-intentioned efforts Volvo has failed there already.
It really isn't though. It's how you do something correctly. Drill into the details of just about any system and you'll see there's a lot of assumptions based on the layers above and below.
A good safety system requires multiple of these failures to occur together to become unacceptable in risk.
This is why we create regulations and inspectors.
I get pretty ticked when people shine laser lights in my direction regardless of their intensity, so I'm not too thrilled about the idea of invisible lasers hitting me square in the pupil without my knowledge.
There are laser measurers sold for a few bucks on Temu. Robot vacuums sold for a few hundred dollars have lidars that map out a room in seconds.
Is there any actual technical reason why automotive lidar should be expensive? Just combine visual processing with a single-point sampler that feeds it points of interest, and an accurate model of the surroundings can be built.
Most spinning robovac LIDARs are 2D. Most solid state robovac LIDARs are like 8x8 array of laser pointers.
Automotive LIDARs are more like 128x64 px for production models, or 1920x1080 px for experimental models, with industry equivalents of GbE and/or HDMI outputs. Totally different technologies.
Probably one factor is range. The article talks about 200-300m range, a robot vacuum has maybe 10m best case?
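The range gap matters more than it looks: for an extended diffuse target, lidar return power falls off roughly as 1/R², so going from a ~10 m vacuum-class range to 200-300 m needs orders of magnitude more transmit power or receiver sensitivity. A back-of-envelope sketch (deliberately ignoring atmospherics, aperture, and target reflectivity):

```python
# Rough lidar link-budget scaling: for an extended diffuse (Lambertian)
# target that fills the beam, received power scales as 1/R^2, so the
# power/sensitivity needed grows as R^2. This is a simplification that
# ignores atmospheric loss, receiver aperture, and reflectivity.

def relative_power(r_near, r_far):
    """How much more power/sensitivity the far range needs, all else equal."""
    return (r_far / r_near) ** 2

factor = relative_power(10.0, 200.0)  # 400x harder at 200 m than at 10 m
```

And that extra power has to stay within eye-safety limits, which is a big part of why long-range automotive units cost more than robovac spinners.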
For example, this one has 120m range with 1cm accuracy and it's 15 euros: https://www.temu.com/bg-en/-digital-laser-distance-meter-50m...
Outside in the sun against other cars or inside against a wall?
Is the 1 cm spec 1σ (or less) or worst-case? It’s a safety-critical application.
Oh my god so many reasons. I don't feel like getting fully into it but that's kind of like asking why you can't use your kitchen scale to measure highway traffic as it drives over it.
I know that automotive parts are required by standard to withstand 80°C (or 120°C for military use). A robot vacuum working in a living room can probably be made cheaper because it does not have to face as harsh an environment.
Also, range is probably a factor. In a living room, you probably need something like 20m max. Your car should "see" farther.
Sure, these are the assumptions, but silicon is silicon, copper is copper and solder is solder. They don't use easy-melting electronics in vacuums and hardened stuff in cars; the tech is about the same unless it is supposed to work in a highly radioactive environment. The plastics are different, but car interiors are full of plastics, so it's unlikely that the cost of the temperature-resistant plastics needed for this is more than a cupholder's.
As for the range, again, pretty powerful lasers are sold at retail for under $10. I am sure there must be higher calibration and precision requirements as the distance increases, but are they really orders of magnitude higher? A 120-meter laser measurer with 1cm accuracy is 15 euros on Temu, and that thing has an LCD screen and a battery as a handheld device. How much distance do you actually need?
Not only that but vibrations play a big part as well, especially on ICE vehicles.
Vibrations are surely an issue with electromechanical systems but hardly with electronics. There are plenty of cheap electronic accessories for cars and you can observe that those keep functioning for years.
Please keep politics out of it.
ICE = internal combustion engine
To add to the rest of the comments: a reliability standard also adds cost. The scale is different, but compare a car bolt vs a bolt for a manned spacecraft.
@dang .... do these comments seem organic to you? old accounts with almost zero karma going out of their way to use the same verbiage to compliment waymo 18 minutes after an article gets posted? .... dead internet at work.
Please don't post like this. If you suspect something, please email us (hn@ycombinator.com) with links to specific comments. The guidelines are clear about this:
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Anytime a Tesla or Elon related article is posted, it gets a barrage of negative comments, usually FUD-like, and any neutral or positive comment gets downvoted heavily. A bit suspicious, to say the least; it's a very clear pattern, though they are not doing it very well, it should be a bit more nuanced.
There is no evidence of any such organised campaign. The critical comments we see against that company and person are generally from known, established HN users, and align with frequently-expressed sentiments among the general public. And the complaint is just as often made that "anything remotely critical" about that company and person is flagged. If posts about the topic are being downvoted and flagged, it's mostly because that person and company are in the news so frequently that most commentary about them is repetitive, sensationalist and uninteresting, and thus off topic for HN.
What a great website. Thanks for the data! And good work
Or everyone is just tired of tesla and their stubborn camera only tech that will fail in higher autonomy cases?
No no it's the cabal...
Could be lurkers triggered
From the article:
> pricing below US $200. That’s less than half of typical prices now, and it’s not even the full extent of the company’s ambition.
This means there are sensors available for like $500 or more. At 4 per car, this is still just $2000, which is a very reasonable cost add even for a midrange car.
And I'm sure Chinese competitors aren't factored into price comparisons like this; the Chinese surely have cheaper options still.
So affordable lidar is not the limitation. Despite that, self-driving doesn't really exist outside of Waymo, which people take to mean that lidar is their killer advantage, but with other cars having lidar, I think that might not turn out to be the deciding factor.
I'm not sure anyone today really thinks self driving hinges on the hardware. Comma does a surprisingly good job with very minimal hardware (in the form factor of an old Tom Tom!). The advantage is really the device's processing power (cramming enough compute in without making it crazy expensive) and the data that the manufacturer has about the environment and training data to handle edge cases. You can't just buy those things, because the people that have them would be your competitors.
When every car has LIDAR will they all begin to blind each other?
(Insert old man rant “Why are everyone’s headlights so gosh darn bright these days?!”)
There's already cases in SF where a bunch of Waymos are right next to each other, driving around.
That might very well be the case…I’m sure the IR beams are encoded uniquely per LIDAR but it might still blind them… good food for thought!
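Per-sensor coding is a real interference-rejection technique: if each unit fires its own pseudo-random pulse code, correlating the received signal against your own code suppresses everyone else's. A toy sketch (this coding scheme is my assumption for illustration, not how any shipping lidar actually works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each lidar gets its own pseudo-random +/-1 pulse code (assumed scheme).
own_code = rng.choice([-1.0, 1.0], size=64)
other_code = rng.choice([-1.0, 1.0], size=64)

# Echoes from both lidars arrive overlapped at our detector.
received = own_code + other_code

# Matched-filter against our own code: our echo adds coherently,
# the other lidar's code mostly averages out.
own_score = np.dot(received, own_code) / len(own_code)
other_leak = np.dot(other_code, own_code) / len(own_code)

print(own_score)        # close to 1: our own echo stands out
print(abs(other_leak))  # small: the interfering code decorrelates
```

Longer codes push the cross-correlation leakage down further, which is the same trick CDMA radios use to share a channel.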
pew pew https://www.techspot.com/news/108045-lidar-great-cars-but-ca...
The article is a bit muddy on what is hope and what is product. Can we _really_ buy a solid state lidar today? At what cost? When can I have it delivered?
The article starts out without saying it but my takeaway at the end is "Not $200" and "Not in the near future"?
I never understood why Tesla HAD to get rid of the Lidars. Expensive today sure, but can you imagine all that training data they missed out on? Technology has a way of becoming cheaper and cheaper. It seemed short sighted, even if at a loss, again, the training data.
If the pros of having a camera are monumental, then couldn't the video and lidar be combined to be even greater?
Because Tesla Clown-in-Chief asked if humans could drive with just visual input, why can't a Tesla? C-in-C conveniently ignored that, to begin with, humans have binocular vision, and his cars had none. Also conveniently ignored were the facts that human eyes have immense dynamic range, are self-cleaning, and can move to track objects of interest. On top of this, humans also have hearing, which helps gauge danger. Many of these things could be filled in by Lidar but since C-in-C apparently had a revelation from heaven, possibly caused by drugs, lidar had to go.
They never used LiDAR. They removed the radar.
I really wish that companies would just sell their products instead of doing the business-relationship two-step. It is an unnecessary waste of time just to buy a product.
It looks like these sensors have just enough range to be effective for lidar terrain scanning. I would have bought a Movia S right now just to try it out.
I hate this as well, but there are valid business reasons:
- Setting up infrastructure and support for consumers is expensive and hard to do well, especially if that's not your main industry.
- Some products are only economical if mass produced, and that requires large, guaranteed buyers.
Isn't LIDAR a high powered laser? How could they just go selling it to consumers like you?
It is a Class 1 laser. I can buy a Class 4 laser online that I can start fires with. Laser danger is not the reason.
Cameras alone can handle the vast majority of nominal driving scenarios, but the long tail of safety-critical edge cases is where progress slows dramatically. Many of these cases are driven by degraded or ambiguous perception, which is where multi-modal sensing, such as combining cameras with lidar, can reduce uncertainty. In adverse weather like fog or heavy rain, that reduction in uncertainty can translate directly into safer behavior, such as earlier and more confident emergency braking, even if no single sensor performs perfectly on its own.
Laser safety people: how concerned should we be about city streets full of aggressively cost-engineered Lidar emitters?
Basically not.
Biggest risk is that a beam steering element stops while the emitters are running. Basically impossible with a phased array emitter like the article discusses.
And you'd probably have to be staring into the laser at close range while it was doing that.
The laser beams usually aren't tiny points like your laser pointer. Several centimeters across is more typical, especially at typical road distances. Your pupil is very small in comparison.
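A back-of-the-envelope version of that pupil-vs-beam argument (all numbers are illustrative assumptions, not measurements of any real unit):

```python
import math

# Illustrative geometry: a beam a few cm across at road distance
# versus a fully dark-adapted pupil.
beam_diameter_m = 0.03    # ~3 cm beam spot (assumed)
pupil_diameter_m = 0.007  # ~7 mm dilated pupil

beam_area = math.pi * (beam_diameter_m / 2) ** 2
pupil_area = math.pi * (pupil_diameter_m / 2) ** 2

# Fraction of the beam's total power that can actually enter one eye.
fraction = pupil_area / beam_area
print(f"{fraction:.3%}")  # 5.444%
```

So even staring straight down the beam, only a few percent of the emitted power reaches the retina, before accounting for the beam sweeping past in microseconds.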
The optical hazard calculations are a very early part of the design of a LIDAR system, and all of this does get considered. Or should anyway.
Biggest risks are for people involved in R&D, where beams may be static and very close to personnel.
Just to be clear, this article is talking about the possibility that this might happen one day. LiDAR remains prohibitively expensive AFAICT.
This is not quite true; it depends what you're talking about. Automotive lidar sensor prices typically range from $150–300 today for standard units. Mid-range ADAS (L2+/L3) sensors are about $600–750, and the long-range units used by robotaxis like Waymo's are about $1,500–6,000 or more per sensor.
https://www.fleetowner.com/technology/article/55316670
The ~$75k per sensor in 2015 refers to the long-range sensors. 99% of production is from 4 Chinese companies: Hesai, RoboSense, Huawei, and Seyond.
I still believe in cameras. I have a comma.ai 3X and it works really well. Just add a thermal camera to deal with fog etc. Waymo has some of the same camera limitations that Comma and Tesla do.
There's no reason to believe in just cameras. Cameras are easily blinded by glare and their efficacy drops dramatically when they get dirty. Having inexpensive lidar AND cameras is the best of both worlds. When it comes to safety and comfort, we shouldn't be trying to optimize for cost. If we figure out how to make cameras alone bulletproof in the future, great. But that's not where we're at today.
I think that supports most people's viewpoint though. Visible-light cameras alone can 'work', but more sensors are of course better. Your infrared example, for instance.
The only reason not to have more sensors of different types is cost (equipment and processing costs). Those costs are coming down fast.
Same limitations as radar, ultrasonic, lidar, and vision cameras combined?
Even Tesla used to have radar and ultrasonic in their cars until relatively recently. And they use lidar (from Luminar) in their mapping fleet.
Anyone know what the ballpark total marginal cost to a consumer is for increasing the BOM of a car by $1?
Radar is extremely expensive, and lidar is just below that.
Glad to see someone lowering the cost of this technology, and hope to see lots of engineers using this tech as a result.
We might even see a boom in LIDAR tech as a result
What makes you say radar is extremely expensive? Virtually every car from the last decade has at least one, many have two or more. They’re barely more than a PCB and a radar ASIC.
If you want to compete with LIDAR, you need high resolution 4D (range, velocity, azimuth, and height) RADAR. Those are usually phased arrays with expensive phase sensitive electronics, and behind that a chip that can do a lot of Fourier transforms very quickly.
The cheap RADAR devices you're talking about usually only output range and velocity, sometimes for a handful of rather large azimuth slices. That doesn't compete with LIDAR at all.
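To make the "lots of Fourier transforms" point concrete: FMCW radars typically take a 2D FFT over the (sample, chirp) axes of a data cube to get a range-Doppler map, and 4D imaging radar repeats this per antenna element. A toy simulation with one synthetic target (all parameters invented for illustration):

```python
import numpy as np

# Toy FMCW data cube: 64 chirps x 256 samples per chirp.
n_chirps, n_samples = 64, 256
range_bin, doppler_bin = 40, 10  # assumed target position

fast = np.arange(n_samples)            # fast time: within one chirp
slow = np.arange(n_chirps)[:, None]    # slow time: chirp to chirp

# Beat signal: one tone per axis encodes the target's range and velocity.
cube = np.exp(2j * np.pi * (range_bin * fast / n_samples +
                            doppler_bin * slow / n_chirps))

# Range FFT along each chirp, then Doppler FFT across chirps.
rd_map = np.fft.fft(np.fft.fft(cube, axis=1), axis=0)

d, r = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
print(r, d)  # recovers the assumed target bins: 40 10
```

A real 4D unit adds a third FFT (or beamforming) across the antenna array for azimuth/elevation, which is where the expensive phase-coherent electronics come in.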
I wonder if this could be adapted to the vtuber market. Saw a vtuber body tracker being marketed at $11k recently.
Below is one of the comments posted to the original article; reading it makes me think that most of the article has been regurgitated by some AI:
>"This misleading article contains numerous factual errors regarding automotive lidar. Here are the most glaring:
There are multiple manufacturers, including Hesai, that use mechanical means for at least one scan axis and are already sold for a fraction of the "$10k - $20k" price noted by the author. Luminar itself built this class of scanners before going bankrupt.
Per Microvision's own website, the Movia-S does not use a phased array and also does not have a range anywhere near 200m.
Velodyne and Luminar do not even exist as companies anymore. Both have gone bankrupt and been acquired by competitors."
Is this human-safe at these volumes? There was a time you could get your feet sized by putting them into an X-ray box at the shoe store. Those were removed from stores once the harm was known.
Well, the energy levels used in those devices should be minuscule, and the wavelengths used are well studied. The problem with X-rays was the lack of studies on health effects, and of regulations addressing those effects. Since that time, we've studied radiation (be it light, RF, or other parts of the spectrum) much more. There is indeed a possibility that we're overlooking some bio-electromagnetic interaction effects; for instance, there is now some evidence that LED lights might not be harmless. But again, it's not that they affect biological structures somehow; rather, the lack of certain spectral components has some effects. It is an interesting topic to research. But the lidar "should" be safe.
The main damage risk from LIDAR is to retinal rods and cones. You just know some jerk is going to overclock his system and we know some people just don't care about the harm they cause so long as they get a benefit. As a combo that means I'll be wearing protective eyewear outdoors the day this tech comes to the roads.
Wouldn't it be easier to integrate it into home vacuums?
What is this author even doing with these numbers?
can I buy it on digikey yet?
BTW what happens when there are hundreds of Lidar signals at one intersection?
There's no way a sensor can tell if a signal was from its origin?
Guessing any signal should be treated as untrusted until verified but I suspect coders won't be doing that unless it's easy
I saw a Waymo in Seattle, today. If Waymo can get Seattle right, that gives me a lot of confidence that their stack is very capable of difficult road conditions.
Note: I have not had the pleasure of riding in one yet, but from what my friend in SJ says, it’s very convenient and confidence-inspiring.
Seattle probably isn't any harder than SF, other than the occasional weather event where the hills ice over and we get a bunch of funny (and scary) videos.
I have had the pleasure of riding a few times in San Francisco.
The drive was delightful and felt really safe. It handled the SF terrain, traffic and mixed traffic like trams very well.
I wouldn't trust a self-driving Tesla (or any camera-only system) though!
I took the Waymo from San Jose airport to home on the peninsula. It took the 101 highway back for the most part, driving very conservatively at 65-55 mph, and in the right most lane. It still has a few quirks though. When there aren't any cars around it will speed up to 65 mph, but at on-ramps, it will slow down to 55 and then speed up once past. It will get stuck behind slow drivers being in the right most lane and patiently follow them a few car length behind them. On the plus side, the lidar stack field of view as shown on the internal display seems to see pretty far down the highway.
Tesla doesn't have lidar?
No. They don't even have radar, camera is all you need, as per Elon.
Even more fury-inducing: on cars that physically have ultrasonic parking sensors, they disabled them to move to a vision-only stack that is nowhere near as accurate or as good, and which categorically cannot tell that a change in ground truth has occurred in its blind spot. But hey, all _people_ need are two cameras, right?
hooboy, https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
That's wild!
Why wouldn't you trust a Tesla? Millions of people let their Tesla drive them all over the USA (not geofenced like Waymo) without touching the wheel, from parking spot to parking spot, every day. Have you tried it?
Maybe because of the multiple investigations Tesla has currently due to crashes, deaths, injuries, etc. all caused by "whoops our cameras were fooled by some glare/fog and accelerated into a truck/pole"
Those are mainly Autopilot incidents, which people conflate with FSD, and a high percentage are human-caused accidents (Autopilot requires full attention and the driver is liable).
Why does Tesla ship a feature called "autopilot" which kills you if you use it instead of "FSD"?
Autopilot is Tesla’s brand name for adaptive cruise control with lane centering. This is a common feature available on a wide range of vehicles from nearly every major manufacturer, though marketed under different names (e.g., ProPilot, BlueCruise).
Drivers can and do misuse adaptive cruise control systems, sometimes with fatal consequences. Memes aside, there is no strong evidence that fatal misuse occurs more frequently by owners of Tesla cars than with comparable systems from other brands.
This perception reflects the Baader–Meinhof phenomenon, more commonly known as the frequency illusion. Nobody is collecting comparable statistics for other brands, so it's assumed the phenomenon doesn't occur with them.
A similar pattern occurred with media coverage of EV fires. Except in this case, good statistics exist which prove the opposite: ICE vehicles catch fire more often than EVs.
> Why wouldn't you trust a Tesla? Millions of people let their Tesla drive them all over the USA (not geofenced like Waymo)
I own a Tesla and paid about $10K for the full self driving capability a few years ago. Yeah, I would not trust a Tesla to drive me from airport to my house. There is a reason Tesla is still stuck at level 2 autonomy certification and not 3, 4 or 5.
I would agree for most Teslas on the road. However, the very latest (HW4) cars are significantly better at FSD where I would nearly trust it now. Most of those older (pre-2023?) cars will not have their hardware upgraded so they'll still have FSD that drives like an idiot!
Because it is not real autonomous driving? Being liable for software that you can neither verify nor trust is THE dealbreaker. Once Tesla says "We are liable for all accidents with FSD" with higher level autonomous driving this game changes. But Waymo is just way more reliable.
> Millions of people let their Tesla drive them all over the USA
There aren't a million Teslas with FSD active in the US. According to Tesla in their latest earnings report there are 1.1 million people worldwide with FSD.
What? You've been able to get Chinese lidar sensors for 12 EUR for a long time already.
How could I buy one?
It might, but comma.ai proves that lidar is a red herring, which is further supported by the fact that Waymos are able to drive vision-only if necessary.
No one really disputes that some level of autonomous driving is possible with only cameras, it's a matter of how safe and sure you wanna be.
> comma.ai proves that lidar is red herring
I mean, it doesn't. If you actually look at it, comma.ai proves that level two doesn't require lidar. That's not the same as full-speed, safe autonomy.
Whilst it is possible to drive vision-only (assuming the right array of cameras, i.e. not the way Tesla have done it), lidar gives you a low-latency source of depth that can correct vision mistakes. It's also much less energy-intensive for working out whether an object is dangerous and on a collision course.
To do that with vision, you need to work out what the object is (i.e. is it a shadow?), then you have to triangulate it. That requires continuous camera calibration, and it isn't all that easy. If you have a depth "prior" (yes it's real, yes it's large, and yes it's going to collide), it's much, much simpler to use vision to work out what to do.
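The triangulation described above boils down to the standard stereo relation Z = f·B/d, which also shows why small disparity errors blow up at range. A toy illustration (the camera parameters are invented for the example):

```python
# Classic stereo depth: Z = f * B / d, with illustrative parameters.
focal_px = 1000.0  # focal length in pixels (assumed)
baseline_m = 0.12  # distance between the two cameras (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in metres from the pixel disparity between the two views."""
    return focal_px * baseline_m / disparity_px

# At long range the disparity is tiny, so a fixed 1 px measurement
# error swings the depth estimate wildly; a lidar "depth prior"
# sidesteps exactly this.
print(depth_from_disparity(12.0))  # 10.0 m
print(depth_from_disparity(2.0))   # 60.0 m
print(depth_from_disparity(1.0))   # 120.0 m
```

Note the nonlinearity: between 2 px and 1 px of disparity, the estimated depth jumps from 60 m to 120 m, so sub-pixel noise dominates far-field stereo accuracy.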
It's fair to point out that comma.ai is an SAE level 2 system; however, it's not geofenced at all, which is an SAE level 5 requirement. But really, that brings up the fact that SAE's levels aren't the right ones, merely the ones they chose to define since they're the standards body. A better set of levels are the seven I go into more detail about on my blog.
As far as distinguishing shadows on the road, that's what radar is for. Shadows on the road as seen by the vision system don't show up on radar as something the vehicle will run into.
Your autonomy scale is pretty arbitrary and encodes assumptions about the underlying technology and environments the vehicle is supposed to implement and operate in.
The SAE autonomy scale is about dividing responsibility between the driver and the assistance system. The lowest level represents full responsibility on the driver and the highest level represents full responsibility on the system.
If there is a geofenced transportation system like the Vegas loop and the cars can drive without a human driver, then that is a level 5 system. By the way, geofencing is not an "SAE level 5" requirement. Geofencing is a tool to make it easier to reach requirements by reducing the scope of what full autonomy represents.
Will Musk backtrack on the whole "CV is enough, that's how humans do it" line if the price becomes this low?
To be fair, Musk was only parroting what Karpathy was telling him so you should ask him how self driving cars are supposed to work with CV only.
Well he's also argued that just using CV reduces sensor contention and he claims it improves performance and release velocity, which is why they also got rid of radar and ultrasonic sensors. I am doubtful although it'll be interesting to see regardless
Oh hell yeah, we can finally stop the braindead attempts to make a safe self-driving car with just cameras.
Tesla actually re-introduced radar sensors in HW4. https://www.teslarati.com/tesla-hardware-4-hd-radar-first-lo...
They might not use them for autopilot, but maybe for some emergency braking stuff, when everything else failed.
I wouldn't be surprised if this was a better solution. I think while radar might have worse spatial resolution, its depth perception, speed measurement capability, and general robustness to adverse weather might make it a better complementary sensor.
Lidar struggles with things like rain and snow way worse than cameras do.
Is there anyone using only cameras except Tesla?
Xpeng, Wayne, aiMotive to name three. Probably many others, who claim to use LIDAR but don’t actually rely on it. Because LIDAR is perceived as a prerequisite for autonomous safety, admitting to not needing it is a bad PR move — for now.
There is a massive technical difference between a vision-first architecture with LiDAR redundancy and the no-LiDAR-at-all approach Tesla takes. Those are not the same architecture, so claiming XPeng, Waymo, or aiMotive validate Tesla is technically misleading.
XPeng's system is sensor fusion; it is not camera-only. Waymo is even clearer: for them, LiDAR is not optional. aiMotive has started to market camera-only, but it's experimental, with no production deployments.
Xpeng is abandoning sensor fusion. aiMotive has never bothered with sensor fusion. I never mentioned Waymo; unfortunately the AI gods at Apple auto-corrected me typing Wayve, as in Wayve Technologies Ltd.
Tesla FSD is not accurately described as a "no LIDAR at all" approach, and claiming it as such is technically misleading.
Tesla is using radars as well as cameras. No one is using only cameras
Nope...
Yes, silly using just cameras. I mean, humans have lidar sensors, that's why they can drive; why didn't we just copy that... oh wait.
In all seriousness though, Tesla are producing Cybercabs now which are a tenth the price of Waymo's and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
Humans also don't have wheels, but we build objects with wheels. It is as if we can build objects that don't resemble humans for specific purposes. Crazy...
> Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world.
My understanding is that cyber cabs still need safety drivers to operate, is that not the case?
They have no steering wheel or pedals so no
Yes, but they are useless; they can't steer, hence why they have more accidents than humans per mile driven.
Robotaxis in Austin are in the process of removing in car safety monitors, there is a chance you would get one today
They are just moving the safety monitor in a car that drives behind you.
https://electrek.co/2026/01/22/tesla-didnt-remove-the-robota...
It would be funny, but tbh it's just sad.
Everything for the stock pump
Tesla robotaxi crash rates are also currently (as in, with safety drivers) 4x higher than human drivers', so that's not very promising.
> Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world.
Wait what? when did they actually enter mass production?
> I mean humans have Lidar sensors
Real-time SLAM is actually pretty good; the hard part is reliable object detection using just vision. Tesla's forward-facing cameras are effectively monocular, which means it's much, much harder to get depth (not impossible, but moving objects are much more difficult to observe if you only have cameras aligned on the same plane with no real parallax).
Ultimately Musk is right that you probably don't need lidar to drive safely, but it's far simpler and easier to do if you have lidar. It's also safer. Musk said "lidars are a crutch" not because he is some sort of genius; it's been obvious since the mid-00s (if not earlier) that SLAM-only driving is the way forward. The reason he said it is that he thought he could save money by not having lidar. The problem for him is that he didn't do the research to see how far away proper machine perception is from the last 1% of accuracy needed to make vision-only safe and reliable.
> Tesla's forward facing cameras are effectively monocular
Notably, human perception is effectively monocular in driving situations at distances of 60 feet or farther. It's best in the area where your limbs can reach.
We don't need stereoscopic vision to drive.
> Wait what? when did they actually enter mass production?
"mass" is a strong word but the first one came off their production line 5 days ago
ramp to high volume will probably be extremely slow
Not mass production yet, but the first one rolled off the completed assembly line at giga texas last week
Sensor fusion is not far simpler, when the sensors disagree, and they will often, you have to pick which to trust.
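For what it's worth, the textbook answer to "pick which to trust" is not to pick at all but to weight each estimate by its noise variance, Kalman-style. A toy inverse-variance fusion of two disagreeing range measurements (the variances are illustrative assumptions, not real sensor specs):

```python
# Inverse-variance fusion of two disagreeing range measurements.
def fuse(z1: float, var1: float, z2: float, var2: float):
    """Minimum-variance combination of two noisy estimates."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always below either input variance
    return fused, fused_var

# Camera says 50 m (noisy at range), lidar says 48 m (tight).
z, v = fuse(50.0, 4.0, 48.0, 0.04)
print(round(z, 2), round(v, 4))  # 48.02 0.0396
```

The fused estimate lands near the lidar reading because the lidar's variance is much smaller, and the fused variance is lower than either sensor's alone; the genuinely hard part is estimating those variances per situation (glare, rain, dirt), which is where the disagreements bite.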
It is amazing to see how many people here are confident they know the one true way to build autonomous systems based on nothing but wanting to confirm their biases
This is a weirdly tired counterpoint that Elon and Elonstans like to bandy about as if it's an apples to apples comparison. Humans have a weirdly ultra-high-dynamic-range binocular vision system mounted on an advanced ptz/swivel gimbal that allows for a great degree of freedom of movement, parallax effects, and a complex heuristic system for analyzing vision data.
The Tesla FSD system has... well, sure, a few more cameras, but they're low resolution, and in inconveniently fixed locations.
My alley has an occlusion at the corner where it connects to the main road: a very tall, very ample bush that basically makes it impossible to authoritatively check oncoming traffic to my left. I, a human, can determine that if I see the light flicker even slightly as it filters through the bushes, that the path is not clear: a car is likely causing that very slight change in light. My Tesla has no clue at all that that's happening. And worse, the perpendicular camera responsible for checking cross-traffic is mounted _behind my head_ on the b-pillar, in a fixed location that means that without nosing my car _into_ the travel lane, there is literally no way for it to be sure the path is clear.
This edge case is navigated near-perfectly by Waymo, since its roof-mounted lidar can see above and beyond the bush and determine that the path is clear. And to hit back on the "Tesla is making cheaper cars that can drive autonomously anywhere in the world": I mean, they still aren't? Not authoritatively. Not authoritatively enough that they aren't seeing all sorts of interventions in the few "driverless" trials they're doing in Austin. Not authoritatively enough when I let my Tesla's FSD have its moment of glory. It works well enough on the fat part of the bell curve, but those edges will get you, and a vision-only system is extremely brittle in certain conditions and failure modes that a lidar/radar backup would help cover.
Moreover, Waymo has brought lidar development in-house, they're working to dramatically reduce their vehicle platform cost by reducing some redundant sensors, and they can now simulate a ground truth model of an absurd number of edge cases and odd scenarios, as well as simulate different conditions for real-world locations in parallel with their new world modeling systems.
None of which reads to me as "not going well for Waymo." Waymo completes over 450,000 fully autonomous rides per week right now. They're dramatically lowering their own barriers to new cities/geographies/conditions, and they're pushing down the cost per unit substantially. Yeah, it won't get to be as cheap as Tesla owning the entire means of production, but I'm still extremely bullish on Waymo being the frontrunner for autonomous driving for the foreseeable future.
Waymos are still making lots of errors that a human wouldn't (stopping in the middle of a road due to a puddle was a recent one: https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...). 17 years after Waymo bet on LIDAR, I think Tesla is ahead now in most respects. I could be wrong though; we will probably know by the end of this year.
> I think Tesla is ahead now in most respects
Do you actually own a Tesla? I do. With FSD. And let me assure you, you are very wrong.
How old? The 2023+ models with HW4 are pretty good at FSD. A 2021 model with HW3 was scary bad when I tried it about a year and a half ago.
> I tried it about a year and a half ago.
So, you do not own a Tesla.
> My Tesla has no clue at all that that's happening. And worse, the perpendicular camera responsible for checking cross-traffic is mounted _behind my head_ on the b-pillar
It has a wide angle camera in front that you usually can never see outside service menu. It should cover that case.
> Yes, silly using just cameras, I mean humans have Lidar sensors, that why they can drive, why didn't new just copy that....oh wait.
Humans don't have wheels and cannot go 70MPH. Humans also don't have rear view cameras and cannot process video feeds from 8 cameras simultaneously. The point of these machines is to be better than humans for transportation. If adding LIDAR means that these vehicles can see better than humans and avoid accidents that humans do get into, then I for one want them in my vehicle.
The human brain is a product of millions of years of dealing with spatial problems for survival — and most individual humans are the product of thousands of hours of experience using it to navigate the physical world.
We're always getting closer at emulating this, but we're still a ways off from matching it.
I don't understand what you're saying.
Stereo based depth mapping is kind of bad, especially so if it is not IR assisted. The quality you get from Lidar out of the box is crazy good in comparison.
What you can do is train a model using both the camera and Lidar data to produce a good disparity and depth map but this just means you're using more Lidar not less.
>It all seriousness though, Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
This feels like a highly misleading claim that might technically be true in the sense that there are fewer restrictions, but a reduction in restrictions doesn't imply an increase in capability.
The comment about Waymo seems to be particularly myopic. Waymo has self driving technology and is operating as a financially successful business. There is no conceivable situation where the mere existence of competition with almost the same capabilities would shake that up. Why isn't it companies like Uber, who have significantly fallen behind, that are in trouble?
>Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
And so is the comment about Tesla cyber cabs.
Humans cannot drive safely. Human drivers kill someone every 26 seconds. Waymos have never killed a person.
Part of that is that humans are distractible, and their performance can be degraded in many ways, and that silicon thinks faster than meat.
But part of it is the sensor suite. Look at Waymo vs Tesla robotaxi accident rates.
The brains (ai models) are more important than the sensors. Cameras are good enough. Lidar doesn’t keep Waymos from driving into an 18” deep puddle, or driving the wrong way down the street. Lidar doesn’t help predict when a pedestrian is going to try to cross the street. Lidar doesn’t give the car the common sense to slow down because a child just ran behind a parked car and will soon be coming out the other side.