
JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386
I don't know, I have two cameras (eyes) and no radar (my bald head appears to be a radar dome, but it is not) and I do pretty good.
For context: I developed vision software that scored 112 IQ on Raven's (that's ridiculously good, if you are in the know). I think it is mostly a software thing that will be figured out. To be clear, I do not disagree with keeping lidar/radar, although it creates cross-dependencies and complications that make software development and troubleshooting more difficult.

If it can be solved without radar/lidar, it would be better for cost. As you said, it is just a software problem. We cannot expect $100k+ worth of sensors and hardware to be adopted by the average person, especially if it requires frequent recalibration and has a short lifespan (as lidar does).

So far, FSD's vision-based approach is doing fantastic. It would be nice if it were further along, but it is moving.

For robotaxis, I get the necessity for these expensive sensors: there is no human available to take over in the event of an issue.
 

JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386

Yeah. You can't take vehicles not set up for police work and expect them to function for police work... Ford has an entire division set up to get its vehicles ready for police duty; Tesla does not. So, they took something NOT set up for police use (a 3rd-party company added the lights, bumper guards, and ballistic gear, but there is no OS integration for police needs), set it up to fail, then said, see, it failed...
 
Last edited:

960design

macrumors 68040
Apr 17, 2012
3,793
1,670
Destin, FL
Yeah. You can't take vehicles not set up for police work and expect them to function for police work... Ford has an entire division set up to get its vehicles ready for police duty; Tesla does not. So, they took something NOT set up for police use (a 3rd-party company added the lights, bumper guards, and ballistic gear, but there is no OS integration for police needs), set it up to fail, then said, see, it failed...
Cart before the horse... it is easy to agree with a 'good idea', much more difficult to implement one. Sadly, the 'good idea' people significantly outnumber the implementers.
 

oneMadRssn

macrumors 603
Sep 8, 2011
6,083
14,193
I don't know, I have two cameras (eyes) and no radar (my bald head appears to be a radar dome, but it is not) and I do pretty good.
Yea, but surely you know and agree that your eyes are FAR more capable than a CMOS sensor, and your brain is far more sophisticated (and orders of magnitude more efficient) at processing that data than even the most powerful supercomputers.

I think it is mostly a software thing that will be figured out. To be clear, I do not disagree with keeping lidar/radar, although it creates cross-dependencies and complications that make software development and troubleshooting more difficult.
I don't doubt that software will eventually get there, but I think that horizon is further off than most people assume. People mistakenly think that AI is some magic bullet that solves incredibly difficult software problems; it's not like that, as I'm sure you know.

There is also something to be said for keeping it simple, stupid. Radar was invented nearly 100 years ago, and we've got it nearly perfected today. Lidar was invented over 50 years ago, and has gotten to the point where every Apple iPhone Pro has had it built in for 2 generations now. Why spend immense resources (money, compute power, human time, etc.) trying to solve problems in vision software that can be solved using existing tech instead? Leave that to the academics; operating companies should focus on getting practical results.
 
Last edited:
  • Like
Reactions: Analog Kid

oneMadRssn

macrumors 603
Sep 8, 2011
6,083
14,193
That's an excellent observation. Big Pharma is doing well with AI-generated medications! I would add that AI-generated Instagram models are not doing too shabby, either. ;)

My company is working hard at finding the proper niche for AI solutions. Right now it feels more like square pegs being shoved into round holes with the only tool available, a hammer. Clients only appear to know 'buzzwords' but do not understand what they mean. It becomes difficult not to laugh at the ignorance and the buddy promotion (CEO to CTO to CFO) of buzzword salads. (Is there a weekly buzzword handout that gets delivered to CEOs?)
I can talk all day long about the adoption of AI.

Not just big pharma, by the way; service providers and medical device companies are putting AI to good use too. Imaging and radiology are being revolutionized by AI.

I get new AI vendors bugging me literally every week right now, trying to sell me access to their new AI tool or AI platform that will supposedly make my job incredibly efficient. It's all (a) mostly bull, and (b) wildly expensive. I think (a) will eventually be solved through improved and iterative software, provided these companies have the financial stamina. (b), however, is a huge problem for AI. The hardware costs to run AI (more chips, more datacenters, more power, more cooling capacity, which itself uses more power) are growing much faster than the improvements to efficiency. The cost of tokens is barely going down, while the demand for tokens is rising like crazy as people shove more and more into AI. It's not sustainable, and as of right now, I don't think anyone (except healthcare) can deliver a value proposition where the value of their AI services exceeds the cost of running them; and certainly not any automotive company.
 

JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386
There is also something to be said for keeping it simple, stupid. Radar was invented nearly 100 years ago, and we've got it nearly perfected today. Lidar was invented over 50 years ago, and has gotten to the point where every Apple iPhone Pro has had it built in for 2 generations now. Why spend immense resources (money, compute power, human time, etc.) trying to solve problems in vision software that are already solved? Leave that to the academics; operating companies should focus on getting practical results.

Because both Radar and Lidar add more complexity and speed bumps to the problem. They also require much more computing power to process in addition to vision. The problem remains: what do you do if 1 or 2 out of 3 sensors report an obstruction?

We have 2 eyes, and humans have an attention issue that a vision-only vehicle doesn't have. It can also interpret all of its cameras at once, not just what our physical eyes can see. So, it doesn't actually need to see as well as we do, since it has more eyes than we do.
 
  • Like
Reactions: I7guy

I7guy

macrumors Nehalem
Nov 30, 2013
35,142
25,212
Gotta be in it to win it
Because both Radar and Lidar add more complexity and speed bumps to the problem. They also require much more computing power to process in addition to vision. The problem remains: what do you do if 1 or 2 out of 3 sensors report an obstruction?

We have 2 eyes, and humans have an attention issue that a vision-only vehicle doesn't have. It can also interpret all of its cameras at once, not just what our physical eyes can see. So, it doesn't actually need to see as well as we do, since it has more eyes than we do.
Not to mention that loading a vehicle down with expensive equipment is not as cost effective, and maybe not even as safe, as using cameras and lots of AI. I'm at the forefront of AI in two industries and an impartial observer of what Tesla is doing. It's exciting and imo better than Waymo.

Clearly we have strayed way over the line as to the forum topic, and obviously there are no objective facts here, as no one knows if Tesla is in fact going to succeed or fail.
 

oneMadRssn

macrumors 603
Sep 8, 2011
6,083
14,193
The problem remains: what do you do if 1 or 2 out of 3 sensors report an obstruction?
In a mission-critical system like self-driving, the answer is pretty clear: sound the alarm, stop the vehicle, require human intervention. The point of redundant sensors is not to ensure they're always in perfect agreement; the point is to have wider coverage, so that hopefully one of the sensors sees what needs to be seen when the others fail to see it.
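
To make it concrete, here's a toy sketch of that fail-safe policy (the Detection type and sensor names are made up for illustration, not any real autopilot API):

```python
# Toy sketch of the fail-safe policy described above. The Detection type
# and sensor names are illustrative, not any real autopilot API.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "camera", "radar", "lidar"
    obstruction: bool  # does this sensor report something in the path?

def decide(detections: list[Detection]) -> str:
    """Return the vehicle action for one perception frame."""
    flagged = [d.sensor for d in detections if d.obstruction]
    if not flagged:
        return "continue"
    # Redundancy is for coverage, not agreement: a single positive is
    # enough to sound the alarm, stop, and hand control to the driver.
    return "alert_stop_request_driver (flagged by: " + ", ".join(flagged) + ")"

print(decide([Detection("camera", False),
              Detection("radar", True),
              Detection("lidar", False)]))
# -> alert_stop_request_driver (flagged by: radar)
```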
 

JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386
Not to mention that loading a vehicle down with expensive equipment is not as cost effective, and maybe not even as safe, as using cameras and lots of AI. I'm at the forefront of AI in two industries and an impartial observer of what Tesla is doing. It's exciting and imo better than Waymo.

I had and used FSDb before v12, and I still use it after v12. The fact that they completely scrapped the v11 code, started over using AI, and got it to where it is so quickly is amazing... It was a HUGE jump in everything.

The problem is the 'if Tesla is doing it, it must be bad and won't work' mentality.

We need vehicles to be able to drive us at all levels, L1-L5. Taxis AND personal vehicles. If a driver isn't there to take over, we need a different set of rules. But all levels need to exist. L4-L5 don't have to be affordable, but L1-L3 need to be.

Waymo/Cruise can also exist in a world where FSDs exists. They are different markets. We need both to be solved.
 
  • Like
Reactions: I7guy

JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386
In a mission-critical system like self-driving, the answer is pretty clear: sound the alarm, stop the vehicle, require human intervention. The point of redundant sensors is not to ensure they're always in perfect agreement; the point is to have wider coverage, so that hopefully one of the sensors sees what needs to be seen when the others fail to see it.

So if Radar and Lidar "see" a plastic bag floating in the vehicle's path, the answer is "sound the alarm, stop the vehicle, require human intervention"?
 

960design

macrumors 68040
Apr 17, 2012
3,793
1,670
Destin, FL
Because both Radar and Lidar add more complexity and speedbumps to the problem. They also require much more computing power to process in addition with vision. The problem remains, what do you do if 1 or 2 out of 3 report an obstruction?

We have 2 eyes, and humans have an attention issue that a vision only vehicle don't have. They also can interpret using all cameras, not just what our physical eyes can see. So, they don't actually need to see as well as us, since they have more eyes than us.
I'm working on that right now: "Attention Is All You Need", but for vision instead of LLMs. The human eye can only focus on a very small point; the rest is blurred. Software can concentrate all of its processing power on a very small area of the entire FOV, while picking up on primarily horizontal movements in the blurred area for a quick 'glance & track'. Computers can 'focus' on more than one area at a time, and the outcome may yield impressive gains over current vision tech. [simplified explanation]
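
For a rough feel of the 'glance' step, here's a toy sketch (frame sizes, window size, and the motion heuristic are all made up; the real pipeline is far more involved):

```python
# Toy sketch of 'glance & track': cheap motion detection over the whole
# frame, expensive processing reserved for the one window that moved most.
import numpy as np

def pick_focus(prev_frame: np.ndarray, frame: np.ndarray, win: int = 64):
    """Return the top-left corner of the window with the most motion."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int))
    h, w = frame.shape
    best, best_score = (0, 0), -1.0
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            score = motion[y:y + win, x:x + win].sum()
            if score > best_score:
                best, best_score = (y, x), score
    return best  # the "fovea": run the heavy model only on this crop

prev = np.zeros((480, 640), dtype=np.uint8)
cur = prev.copy()
cur[200:232, 460:492] = 255   # something moved mid-right of the frame
print(pick_focus(prev, cur))  # -> (192, 448), the window that moved
```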
 

jz0309

Contributor
Sep 25, 2018
11,318
29,881
SoCal
Wow, so much discussion when I was gone for just a day ;)

Re: cameras and radar/lidar, I'll just say that comparing cameras to human eyes is just not an "apples to apples" comparison, including the compute power in these systems versus our brains.
I've worked on a lidar development project for the past 3 years or so. What I will say is that from a technology perspective it can do much more than cameras, but it is far more expensive, and thus adoption is slow.

This is a very interesting space, but what is needed is regulation; some of what we have today on the roads is quite frankly scary.

Meanwhile, after another day trip through the greater LA area (250 miles one way from west to east), I am enjoying my Ioniq 5 very much.
 
  • Like
Reactions: I7guy

oneMadRssn

macrumors 603
Sep 8, 2011
6,083
14,193
So if Radar and Lidar "see" a plastic bag floating in the vehicle's path, the answer is "sound the alarm, stop the vehicle, require human intervention"?
Yea! Because as Tesla has shown, a vision system cannot reliably distinguish between a plastic bag, a lens flare, or a black fire hydrant against a dark background at night with its cap painted white.

This should not be a surprising thing. When it comes to full self-driving, even Tesla (not Musk) has admitted that full self-driving requires lidar. https://arstechnica.com/tech-policy...aiming-its-cars-could-fully-drive-themselves/ ("Tesla contends that it should have been obvious to LoSavio that his car needed lidar to self-drive...")
 

JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386
Yea! Because as Tesla has shown, a vision system cannot reliably distinguish between a plastic bag, a lens flare, or a black fire hydrant against a dark background at night with its cap painted white.

This should not be a surprising thing. When it comes to full self-driving, even Tesla (not Musk) has admitted that full self-driving requires lidar. https://arstechnica.com/tech-policy...aiming-its-cars-could-fully-drive-themselves/ ("Tesla contends that it should have been obvious to LoSavio that his car needed lidar to self-drive...")

It was a simple question in response to your statement, one that you still didn't answer. What should it do?

Radar and Lidar would not be able to determine whether that was a boulder, a metal object, or a bag, while vision sees it as a plastic bag.

Should the default behavior be to stop the vehicle because vision's read of it as a bag is not to be trusted? Does Radar/Lidar take priority over vision? What if the opposite happens: vision sees a rock in the road, but the lidar sensors fail to see an object. Do you push through since lidar didn't see it?

The answer to that is really expensive, at least right now. So the way to solve it is: do vision only, and have the driver make the call.

Tesla's is not the only driver-assistance system that has moved to vision only. Isn't Mobileye vision only?
 

oneMadRssn

macrumors 603
Sep 8, 2011
6,083
14,193
It was a simple question in response to your statement, one that you still didn't answer. What should it do?
You asked a yes/no question and I literally answered yes.

Radar and Lidar would not be able to determine whether that was a boulder, a metal object, or a bag, while vision sees it as a plastic bag.
Lidar and radar would determine it to be an unexpected object on the road. That's enough information to know to avoid a collision. It could be a bag, or it could be a scrap of building material that fell off the back of a dump truck.

Put it this way: what % risk of collision with a solid object that would seriously damage the vehicle and cause an accident are you willing to accept? Because vision-only systems are not going to be 99.999% accurate this decade (and probably not in the next decade either).

Frankly, I am not willing to tolerate any risk of a false negative like that.
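
Some napkin math on why those nines matter (every number here is assumed, and it naively treats each frame as an independent obstacle call, which real systems don't; it's only meant to show how fast small per-frame error rates compound):

```python
# Napkin math, all numbers assumed: how per-frame miss rates compound.
frames_per_mile = 30 * 60         # ~30 fps at ~60 seconds per city mile
miles = 10_000                    # a few years of typical driving
for accuracy in (0.999, 0.99999):
    p_miss = 1 - accuracy         # per-frame chance of missing an obstacle
    misses = frames_per_mile * miles * p_miss
    print(f"{accuracy}: ~{misses:,.0f} missed calls over {miles:,} miles")
# 0.999:   ~18,000 missed calls
# 0.99999: ~180 missed calls
```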

Should the default behavior be to stop the vehicle because vision's read of it as a bag is not to be trusted? Does Radar/Lidar take priority over vision? What if the opposite happens: vision sees a rock in the road, but the lidar sensors fail to see an object. Do you push through since lidar didn't see it?
The priority is avoiding the risk of false negatives. That means if any sensor in the array detects a problem in the road, then it has to navigate around the problem or revert control to the driver. In other words, never push through: always treat it as a real obstacle until all sensors agree it isn't.


The answer to that is really expensive, at least right now. So the way to solve it is: do vision only, and have the driver make the call.
What I said is exactly that: have the driver make the call when there is any issue. I think we agree.

Tesla's is not the only driver-assistance system that has moved to vision only. Isn't Mobileye vision only?
Mobileye has stopped developing their lidar system, but they still use radar sensors to supplement vision. As far as I know, Tesla is the only large company working on truly vision-only.
 

jz0309

Contributor
Sep 25, 2018
11,318
29,881
SoCal
It was a simple question in response to your statement, one that you still didn't answer. What should it do?

Radar and Lidar would not be able to determine whether that was a boulder, a metal object, or a bag, while vision sees it as a plastic bag.

Should the default behavior be to stop the vehicle because vision's read of it as a bag is not to be trusted? Does Radar/Lidar take priority over vision? What if the opposite happens: vision sees a rock in the road, but the lidar sensors fail to see an object. Do you push through since lidar didn't see it?

The answer to that is really expensive, at least right now. So the way to solve it is: do vision only, and have the driver make the call.

Tesla's is not the only driver-assistance system that has moved to vision only. Isn't Mobileye vision only?
You're making a wrong assumption here: for a computer, vision is data, not an image like what your eye sees.
Both a camera and LiDAR can be trained to recognize a plastic bag; it's the computation of the raw data, nothing else.

And yes, LiDAR is much more expensive than a camera today, but you don't solve the "autonomous driving" problem by using only one or the other because of cost. New technology is typically expensive, and in the past it has become more affordable with volume…

But your edge case of the flying plastic bag just shows that autonomous driving is a challenge that has not yet been solved.
 

JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386
You asked a yes/no question and I literally answered yes.

Sorry, I missed your answer.

Lidar and radar would determine it to be an unexpected object on the road. That's enough information to know to avoid a collision. It could be a bag, or it could be a scrap of building material that fell off the back of a dump truck.

Put it this way: what % risk of collision with a solid object that would seriously damage the vehicle and cause an accident are you willing to accept? Because vision-only systems are not going to be 99.999% accurate this decade (and probably not in the next decade either).

Frankly, I am not willing to tolerate any risk of a false negative like that.

You accept a risk of false negatives all the time when you drive. Your eyes see something, your brain interprets it, and then you quickly decide to swerve or not. We often (more than just often) think we see something, and then it turns out to be something else.

What I said is exactly that: have the driver make the call when there is any issue. I think we agree.

The problem is, when do you make the driver make the call? When 1 sensor identifies an issue? Only when 2 identify an issue but the 3rd doesn't? If you are going to fail over at 1, why even have additional sensors? Just let it fail with vision only and have the driver take over. Save a ton of money, and improve the cameras, the processing power, and the interpretation of the cameras' images.

Mobileye has stopped developing their lidar system, but they still use radar sensors to supplement vision. As far as I know, Tesla is the only large company working on truly vision-only.

My understanding is that Mobileye is vision-only, with the option to add radar. And radar isn't going to help much with a static object.
 

cyb3rdud3

macrumors 601
Jun 22, 2014
4,033
2,717
UK
In Europe the most common taxi car brand used to be Mercedes Benz (nowadays it's more diverse despite there still being lots of Mercedes). It's also a very common bus and truck brand.

What's your opinion on Mercedes-Benz?
It's for taxis and building contractors. 👍 That said, we have had two AMGs, one of which was fun.
 

cyb3rdud3

macrumors 601
Jun 22, 2014
4,033
2,717
UK
Volvo already does this, and it's not as useful as crowdsourced cellphone data. I actually used to work for a division of teleatlas back in the day, and there is pretty much nothing that can possibly beat the traffic and road conditions data that Google and Apple are collecting.
Yup, when our Polestar slips a wheel on our gravel driveway, it shows up on Volvo's connected safety that there is a hazard in the road 🤣
 

JT2002TJ

macrumors 68020
Nov 7, 2013
2,057
1,386
You're making a wrong assumption here: for a computer, vision is data, not an image like what your eye sees.
Both a camera and LiDAR can be trained to recognize a plastic bag; it's the computation of the raw data, nothing else.

And yes, LiDAR is much more expensive than a camera today, but you don't solve the "autonomous driving" problem by using only one or the other because of cost. New technology is typically expensive, and in the past it has become more affordable with volume…

But your edge case of the flying plastic bag just shows that autonomous driving is a challenge that has not yet been solved.

Honest question: how can Lidar differentiate whether an object is a flat plastic bag vs sheetrock vs a metal plate?

I don't really think what I am saying is an edge case. Just this morning, while using FSDs on my way to work, my TMY drove past/over a minimum of 3 objects in the road (1 was a plastic bag, 1 was tire tread, and I don't remember what the 3rd was). That was just this morning; road debris is common. Luckily FSDs didn't stop or need to swerve to avoid them.
 

oneMadRssn

macrumors 603
Sep 8, 2011
6,083
14,193
You accept a risk of false negatives all the time when you drive. Your eyes see something, your brain interprets it and then you quickly decide to swerve or not. We often (more than just often) think we see something, then it turns out to be something else.
It's not the same risk though. I don't ever mistake a parked white box truck for open sky, and I don't mistake those gray impact attenuators at the exit for a travel lane (two actual mistakes Tesla's vision system reproducibly makes).

I don't dispute that vision only systems can be VERY good for driver assist tech. Because you're right, there are things that humans miss all the time that vision systems will catch, and we're overall safer for it. I love that!

But what Musk is proposing is full level-5 self-driving using only vision systems, and that is CRAZY to me for the reasons I already stated above and others.


The problem is, when do you make the driver make the call? When 1 sensor identifies an issue? Only when 2 identify an issue but the 3rd doesn't? If you are going to fail over at 1, why even have additional sensors? Just let it fail with vision only and have the driver take over. Save a ton of money, and improve the cameras, the processing power, and the interpretation of the cameras' images.
As I said, the reason you supplement vision is to reduce false negatives. Vision systems can falsely determine that the road is open where it isn't, BECAUSE they lack depth perception at large distances (stereo cameras can do depth at short distances). Is it a white box truck or open sky above a road? Is that lens flare covering a pedestrian? That's what lidar/radar can answer. It answers the very critical question of "Is there actually a solid object in front of me?"
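
As a toy illustration of that veto (the function and every value in it are made up):

```python
# Toy illustration (made-up values): lidar range as a veto on a vision
# model that has classified the path ahead as free space.
def path_clear(vision_says_clear: bool, lidar_range_m: float,
               stopping_distance_m: float) -> bool:
    # A short lidar return means *something* solid is there, regardless
    # of what the vision classifier thinks it is looking at.
    if lidar_range_m < stopping_distance_m:
        return False   # veto: solid return inside our stopping distance
    return vision_says_clear

# White box truck read as open sky: vision says clear, lidar disagrees.
print(path_clear(vision_says_clear=True, lidar_range_m=40.0,
                 stopping_distance_m=75.0))   # -> False
```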


My understanding is that Mobileye is vision-only, with the option to add radar. Which Radar isn't going to help much with a static object.
My understanding is Mobileye is saying their systems are capable of level-2 self-driving, which requires full human attention at all times. Basically, adaptive cruise control with lane-keeping and the typical driver-assist safety tech. As I said above, I think vision is fine for that. But hardly anyone would consider that to be true self-driving.

Last I heard from their investor info, their plans for a level-3 system make use of radar.
 

oneMadRssn

macrumors 603
Sep 8, 2011
6,083
14,193
Honest question: how can Lidar differentiate whether an object is a flat plastic bag vs sheetrock vs a metal plate?
Snarky answer: A plastic bag isn't going to remain perfectly flat like a metal plate would. It's soft; it bends in the wind. You can describe that difference using math.

Better answer for this context: If you have an iPhone 15 Pro or 16 Pro, download one of the several 3D scanning apps that make use of the lidar sensor and try to scan a plastic bag, sheetrock, or a metal plate. You'll be able to see there is a difference in how each looks when captured in 3D using the sensor.
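
And if you want the flavor of the math from the snarky answer: fit a plane to the returns and look at the residual. Here's a toy sketch with synthetic points (not a real lidar pipeline):

```python
# Toy sketch (synthetic points, not a real lidar pipeline): RMS distance
# of returns from their best-fit plane separates rigid from crumpled.
import numpy as np

def planarity_residual(points: np.ndarray) -> float:
    """RMS distance of 3D points from their best-fit plane (via SVD)."""
    centered = points - points.mean(axis=0)
    # The smallest singular value measures spread off the best-fit plane.
    return float(np.linalg.svd(centered, compute_uv=False)[-1]
                 / np.sqrt(len(points)))

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, (200, 2))
plate = np.c_[xy, np.zeros(200)]                  # perfectly flat surface
bag = np.c_[xy, 0.05 * rng.standard_normal(200)]  # crumpled surface
print(f"plate residual: {planarity_residual(plate):.4f}")  # ~0.0000
print(f"bag residual:   {planarity_residual(bag):.4f}")    # clearly > 0
```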
 

jz0309

Contributor
Sep 25, 2018
11,318
29,881
SoCal
Honest question: how can Lidar differentiate whether an object is a flat plastic bag vs sheetrock vs a metal plate?

I don't really think what I am saying is an edge case. Just this morning, while using FSDs on my way to work, my TMY drove past/over a minimum of 3 objects in the road (1 was a plastic bag, 1 was tire tread, and I don't remember what the 3rd was). That was just this morning; road debris is common. Luckily FSDs didn't stop or need to swerve to avoid them.
A plastic bag in traffic will likely "fly around", changing its shape and form, and LiDAR can detect that.

The thing is that cameras have limitations, and whatever system uses them needs to work within those limitations. LiDAR sensors are expensive, and I am not aware of any "mass volumes" of those sensors yet; cameras undoubtedly are in high-volume production. It is not one or the other; it's a combination of both that will give the best results. But we're not there yet.
 