
Driverless Car: What Does It See?

January 22, 2014


If a car drives itself, what does it see? How does it compare with you?

Ford and MIT are teaming up to help the car do better. A big challenge: How to see around an obstacle, to see what other obstacles lie ahead. My first thought is, the human driver may see more distractions, like the reflections on the hood and the shadows on the ground. On the other hand, the driverless car might miss “potential” hazards farther off. What do you think?

4 Comments
  1. January 22, 2014 6:05 pm

    The main thing is that in Year 1, it might be about as good as a person. Year 2 it’s twice as good at handling hazards. Year 3 models should be 4X, etc., and software updates may have improved the Year 1 models by then. By Year 10, they’ve harvested us for power to run bitcoin mining rigs.

    I think the legal and sociological concerns are more important: Can an inebriated person get in a driverless car? Can you send your kids to school in one while you stay at home in your bathrobe? How many truck, bus, taxi and limo drivers lose their jobs?

  2. January 22, 2014 6:30 pm

    I kind of agree with joel. What the car sees, and how it uses that information, is less important than performance. Theoretically, in a world of a high-density “internet of things”, the car may “see” very little, yet all nearby objects are identified and fed into the driving algorithm. Even if the car saw exactly as we do – no radar or infrared – it is how the information is interpreted that matters. A car with radar need not see much; it can simply use direct data to determine the size, relative velocity and acceleration of other objects nearby, whilst we use a number of visual cues to get a fuzzier picture of the same data.
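    The point about radar is easy to make concrete: given a series of range readings to an object, relative velocity and acceleration fall straight out of finite differences, no visual interpretation needed. A minimal sketch (the range values and the 0.1 s sample interval below are made-up numbers for illustration, not real radar specs):

    ```python
    def relative_motion(ranges, dt):
        """Estimate relative velocity and acceleration from successive
        radar range samples, using simple finite differences."""
        velocities = [(r2 - r1) / dt for r1, r2 in zip(ranges, ranges[1:])]
        accelerations = [(v2 - v1) / dt for v1, v2 in zip(velocities, velocities[1:])]
        return velocities, accelerations

    # An object closing at a steady 2 m per 0.1 s sample, i.e. 20 m/s
    ranges = [50.0, 48.0, 46.0, 44.0]
    vel, acc = relative_motion(ranges, dt=0.1)
    print(vel)  # [-20.0, -20.0, -20.0]  (negative = closing)
    print(acc)  # [0.0, 0.0]  (constant closing speed)
    ```

    Real automotive radar measures relative velocity directly via Doppler shift rather than by differencing ranges, but the idea is the same: the quantities a driver infers from fuzzy visual cues come out of the sensor as plain numbers.
    
    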

  3. SFreader permalink
    January 23, 2014 5:20 pm

    If automated cars had to have a person ‘captaining’ them, i.e., inputting some criteria and able to easily override the system, then such cars would only represent a more reliable navigation and fuel system. OTOH, if automated cars could be dispatched to pick up the dry cleaning without any human driver, then I think there’s a potential for human/social harm.

    A human driver/captain would be able to perceive information that, while non-threatening to the car/passenger, is potentially or actually threatening (or interesting) to someone else, for example, someone in distress by the roadside, or a very distant off-road event. (Imagine — no YouTube footage of that meteorite that hit Russia last year.)

    Another thing that I wonder about re: driverless cars is the decision-making basis (programming) of their ‘logic/behavior’. I see three potential basic designs: (1) ‘locust’ – where only an environmental scan of the nearby/proximate is used to drive the car, (2) ‘eagle’ – where both short and long distance scans are constantly being evaluated, or, (3) ‘octopus’ – completely centralized control over all vehicles, private, public and commercial.

    At present, I can see both benefits and risks … the insurance companies would be involved if only to help identify riskiest scenarios.

  4. January 23, 2014 9:49 pm

    I think something between “eagle” and “octopus” is where we’re headed. Already they’re talking about automated taxi service – there’s octopus for you (and putting lots of immigrants out of work). In fact, there will be automated planes up there. From what I understand, the entire airspace – let alone the ground map – is already tracked by satellites.

