mello yello

Electric Autonomous Vehicles - Future or Flop?


One step closer... (from Wired)

Quote

Congress Unites (Gasp) to Spread Self-Driving Cars Across America

 


On Wednesday, the House of Representatives did something that’s woefully uncommon these days: It passed a bill with bipartisan support. The bill, called the SELF DRIVE Act, lays out a basic federal framework for autonomous vehicle regulation, signaling that federal lawmakers are finally ready to think seriously about self-driving cars and what they mean for the future of the country.

“With this legislation, innovation can flourish without the heavy hand of government,” said Representative Bob Latta, the Ohio Republican who heads up the Digital Commerce and Consumer Protection Subcommittee, in a floor speech just before the SELF DRIVE Act passed by a two-thirds majority. (And no, I’m not shouting at you—it’s an acronym, for Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution.) The Senate will need to pass its own bill before the legislative framework can become law.

This seems like a good time for Congress to step in, and the famously regulation-averse tech industry has actually welcomed the legislative clarification. Self-driving vehicles have been testing on public roads since 2010, when Google hit the streets near Mountain View, California. And in the absence of congressional oversight, states have stepped in to regulate them, creating a patchwork of at least 21 different state laws and guidelines with different purposes, definitions, and priorities. This is a serious pain for the growing self-driving industry, which aspires to build cars permitted on all public roads.

The industry wants the flexibility to experiment, and it says there’s a lot at stake. This came up in every congressional floor speech on Wednesday: Nearly 40,000 people died on American roads in 2016, and the National Highway Traffic Safety Administration says 94 percent of fatal crashes can be attributed to human error. Let’s get rid of the human, and quick, goes the logic.

But the robocar industry also argues that it’s way too early for Congress to demand strict, particular rules for the vehicles. Companies certainly aren’t selling these vehicles to the public yet, and even the largest AV player, Google’s self-driving spinoff Waymo, only has about 100 Chrysler Pacifica minivans on the road (though that number is growing). In other words: There aren’t that many of these vehicles to regulate, and companies are still figuring out how they should work.

Lawmakers, for their part, hope the legislation strikes a balance: stopping short of letting tech and car companies test whatever, wherever, while still giving them enough leeway to try stuff out, collect some data, and determine the best way to operate vehicles without a driver.

“We need to give Congress credit for being both strategic and specific,” says Mark Rosekind, who headed up the National Highway Traffic Safety Administration during the Obama administration and now oversees safety at the self-driving startup Zoox.

So what’s in the bill? And what’s next? Strap on your seatbelt, and say hello to the robot at the wheel:

One Reg to Rule Them All

First, the legislation works out a way for the federal government’s rules to trump state laws and rules. It officially gives the National Highway Traffic Safety Administration power to regulate vehicle design, construction, and performance—the way it does with, well, normal cars. States still have authority over vehicle registration and licensing, but they’ll have a harder time making demands about what goes on inside the car.

Now that NHTSA officially has the power to regulate these things, the legislation gives it a set of deadlines. It has 24 months to come up with rules about what automakers need to submit to the agency to certify they're serious about safety. And it has a year to figure out what features of a self-driving car will need performance standards. How will you know that a car’s sensor configuration—the combination of lasers and cameras that help it “see”—is safe? What about its cybersecurity fail-safes? Or the way it ensures there’s a passenger in the car before taking off? Quoth NHTSA: TBD.

Privacy

Second, the legislation requires autonomous vehicle manufacturers be deliberate about the way they share their passengers’ data. Think about how much a self-driving car company could know about you: where you work, where you live, where you drop your kid off each morning, that you used to go to the gym a lot but stopped about five months ago. Some companies would like to customize these self-driving things to your driving style or preferred nondriving activity. Maybe you like a cautious approach, or to watch a certain kind of show while rocketing toward work. Consumers will probably want that info protected.

Under this bill, these companies must have “privacy plans” describing how they'll collect, use, and store data. They’ll also have to lay out how customers will be informed about what’s happening to their data and what they can do if they don’t want it shared with anyone.

Exemptions

Finally, the legislation makes it a lot easier for self-driving cars to hit the road. Today, Federal Motor Vehicle Safety Standards (FMVSS, for those who are hip with it) govern how vehicles are designed. And because humans drive the vast majority of cars today, the standards are created with humans in mind. Steering wheels are necessary, as are brake pedals. But a car driven by a computer wouldn’t have those features—in fact, allowing humans to intervene by grabbing the wheel could be dangerous.

Today NHTSA has the power to grant 2,500 FMVSS exemptions each year. This legislation will gradually up that number—by a lot. In year one, the federal agency could grant up to 25,000 exemptions; by year two, 50,000; and by years three and four, 100,000. Meaning: More self-driving cars testing on public roads. It’s about to get much more robot-y up in here.

What’s Next?

Well, this is just the first half of this process. Now the Senate has to pass its own bill. Then both houses will work together to come up with compromise legislation that the president can sign. Perhaps easier said than done: A top Senate Democrat told Politico that “there are a number of differences” between the Senate’s draft and what the House just passed.

And then the biggest job of all: NHTSA will need to implement the thing. If the bill passes in its current state, the Department of Transportation and the transportation secretary, Elaine Chao, will have to hit a bunch of deadlines: They have two years to come up with exactly what companies must do to get certified. They’ll have 18 months to kick off the rule-making process for privacy. They have to issue those FMVSS exemptions. And they’ll have to withstand pressure from a bunch of bickering car and tech companies to do it. It’s a big job for a department that doesn’t yet have a nominee for NHTSA administrator—but the future's coming at it, fast.

UPDATE 6:46 PM ET 9/6/2017: This story has been updated to clarify the legislative process.


 


Just because some tin-pot third-world dictatorship passes legislation to promote itself back into some sort of self-delusion that it actually matters to ordinary everyday lives everywhere else in the world doesn't mean it's going to be adopted everywhere

 

Australia simply does not have enough power points and double adaptors to cover a cold night in the Snowys, let alone total battery-powered stradal domination from the likes of self-proclaimed wannabe billionaires like Elon Musk, disguised as century-old recycled-idea visionaries, with his Ponzi-scheme imaginary assets

 

yeppers, it's no secret i oppose enforced adoption of questionable technology that really serves no purpose other than to make money for a well-spent economy and society that has had its day but relies on terrifying people into buying from them

 

at the end of the day, when the oil is dry and spent, there are no residuals: all burned, and all CO2 recycled by natural means. at the end of a battery-operated life cycle, what do you do with enough spent batteries to cover a small continent?

 

future or flop ?

flop

 

 

Edited by mello yello

On 9/5/2017 at 6:14 PM, proftournesol said:

There's no legitimate reason to park any car with a trailer across all the charging bays other than poor charging bay design. The Model S was never certified for towing; the Model X and 3 are. Tesla just didn't twig that drivers with trailers may want to tow, and at most charging bays there's no way to do it without parking across the bays or disconnecting your trailer. New bays are being designed with 'trailer attached' charging in mind. There's no reason other than spite for an ICE vehicle to park like that

wrong, on so many fronts, sorry

 

she has no right to call anybody anything based on that photo; imagine just for one minute what was there before they took away public spaces and built specially "reserved spaces for the elite"... that is what people will resent

 

 

they took paradise, put up a charging spot

 

Don't it always seem to go
That you don't know what you've got 'til it's gone

 

 

 


i don't see spite, i see a guy with a trailer who probably can't park anywhere else wanting to go into that toilet

he has pulled off the road and obviously isn't stopping there long

 

https://i.pinimg.com/736x/6d/39/12/6d3912be8ac3b9231c9b13e3882687a1--electric.jpg

 

i do see an airhead bimbo Tesla nazi wanting to tow the bloke without even bothering to hear why he may have needed to stop there

 

she can charge at home, so how selfish is she really? she wants it all



On 05/09/2017 at 2:06 PM, mello yello said:

i don't find that bimbo in the feature story polite, calling people "gasholes" without knowing why they are parked there :)

 

No offence @proftournesol - have to agree here - supercharging is a privilege not a right, and a little grace is free. 

 

Hadn't seen the video; find it quite disturbing.

Edited by rmpfyf


A little grace also includes not parking your vehicle in a way that blocks 3 chargers. Surely you couldn't think that was reasonable behaviour in any circumstance?

9 hours ago, proftournesol said:

A little grace also includes not parking your vehicle in a way that blocks 3 chargers. Surely you couldn't think that was reasonable behaviour in any circumstance?

 

I think you're making it ideological (if unintentionally so). Most people don't know what they are, what they do or why they might consider parking somewhere else. Some might think a small amount of time isn't an inconvenience. Some might have extenuating circumstances. From experience, most just don't understand - and a rare few state that if it's not illegal to park there, it's a parking space (which in some cases is technically correct). 

 

No one's going to build understanding by posting a highly conceited video on YouTube with enough license plate detail to identify individuals. All that'll build in the US, at best, is a lawsuit, and the OP would be completely in the wrong if it came to that. Sorry.

8 hours ago, rmpfyf said:

 

I think you're making it ideological (if unintentionally so). Most people don't know what they are, what they do or why they might consider parking somewhere else. Some might think a small amount of time isn't an inconvenience. Some might have extenuating circumstances. From experience, most just don't understand - and a rare few state that if it's not illegal to park there, it's a parking space (which in some cases is technically correct). 

 

No one's going to build understanding by posting a highly conceited video on YouTube with enough license plate detail to identify individuals. All that'll build in the US, at best, is a lawsuit, and the OP would be completely in the wrong if it came to that. Sorry.

There's nothing ideological about it; it's manners. Parking your car and trailer across 3 parking spaces is poor form irrespective of whether they are charging bays, and even worse manners if they are. It's the same as parking your car with the trailer right across the driveway entrance to a petrol station and then walking off to the toilet or wherever. This is in California; it's hard to believe that anyone there hasn't seen Tesla Superchargers or doesn't recognise the Tesla brand name on the top of the frame. The problem, of course, is that there's probably no parking provided for any car towing a trailer, irrespective of its motive power.

One good thing about autonomous cars is that they won't park like that



45 minutes ago, proftournesol said:

There's nothing ideological about it; it's manners. Parking your car and trailer across 3 parking spaces is poor form irrespective of whether they are charging bays, and even worse manners if they are. It's the same as parking your car with the trailer right across the driveway entrance to a petrol station and then walking off to the toilet or wherever. This is in California; it's hard to believe that anyone there hasn't seen Tesla Superchargers or doesn't recognise the Tesla brand name on the top of the frame. The problem, of course, is that there's probably no parking provided for any car towing a trailer, irrespective of its motive power.

One good thing about autonomous cars is that they won't park like that

 

Having lived in CA at what's effectively ground zero for Tesla, let me assure you there are still people in CA (or travelling through it) that have zero idea or interest in anything Tesla. Whilst some wake up every day in awe and anticipation of what missive Elon might have tweeted overnight, others manage to live peacefully. It's still part of North America where an unsurprisingly large volume of people choose to buy and drive trucks for their daily commute. To anyone like this it's nothing like parking your car across the entrance to a petrol station. 

 

At any rate you have no idea why they're parked as such. Could be ignorance, animosity, a misunderstanding, a genuine attempt at a best compromise or even a genuine emergency. I would tend to err on any side that involves building understanding.

 

The only asshat in the situation cited is the one with the video camera.

 

Autonomous cars will need to live in an environment where there are other autonomous cars and their programmed imperfections, cars driven by people that understand enough to coexist and those that don't just yet for whatever reason. 

 

If the best those fortunate enough to afford to be early tech adopters can manage is to post pithy videos about those not having their perspective (pithy being a kind term), the future they want is going to take a great deal longer to achieve, no?


Oh god, fine, no problem with how that car and trailer parked. Stupid me and stupid Tesla drivers for buying expensive cars. Stupid autonomous cars. Fine, I'm outta the thread.

1 hour ago, proftournesol said:

Oh god, fine, no problem with how that car and trailer parked. Stupid me and stupid Tesla drivers for buying expensive cars. Stupid autonomous cars. Fine, I'm outta the thread.

 

Don't do that. No one's saying that the car and trailer weren't parked incorrectly - of course they were. 

 

Simply try to see it from the other side, rather than rushing to critique it - which is the greater error.

 

Everywhere I've driven an EV there's inevitably been someone parked obstructing something I needed to plug into. I didn't label them something that rhymes with 'a**hole' and shame them publicly; I had a chat. Often this led to a chat about EVs, sometimes a test drive. A few are now EV owners; one still keeps in touch, which is ironic given how we met: me somewhat irritable, needing to charge a car (and then sleep), and him having parked his car in the first place he could find in a rush to sprint into a toilet and relieve himself of a dinner that hadn't gone down too well. He'd pulled over at the first exit he found and made a beeline for the first building that looked like it might have the necessary facilities. Hilariously, we met in the loo while I was swearing blue on the phone to a colleague about a particularly marginal day ending with not being able to charge the car on arrival because of a certain sedan in the way. Some fairly busy and noxious noises from one of the stalls paused with 'dude, sorry, that's my car'. Some fresh air later he was thankful and sorry, and asked enough to warrant a ride the next day, a business card and a referral to the sales team; within weeks he was on the waiting list. Quite possibly the only man introduced to EV ownership whilst taking a dump.

 

Imagine instead if I'd taken a photo of his car with number plate, labelled him something universally offensive and posted a video of myself citing him as a problem for the owners of the certain expensive car I was driving.

 

Make advocacy from lemons and it all moves forwards. 


I found this interesting post on the Tesla Motors Club forum on autonomous driving and the differing approaches taken by Mobileye (now Intel) and Google compared to Tesla.

AP1 = Tesla's original Autopilot system, designed by Mobileye (ME)

AP2 = Tesla's in-house designed neural network based Autopilot

It may help explain why Tesla appears to be behind on autonomous driving, and why Musk has decided to go down this path: he believes it'll end up ahead.

 

Quote

I'm not a pro on the software side, and I only dabble in neural networks so all I can offer is observations and opinions. But here are a few items:

I put 26k miles on AP1 before switching to AP2 and now have about 7k miles on AP2. As of right now it's not hard to find situations where one or the other clearly wins so I can't really make the case that one is superior. Depending on what you care about and how and where you use it you may well have a strong preference for one or the other. Personally I'm pretty happy with AP2 right now but I can certainly understand people who are deeply unsatisfied with it.

From what I've been able to dig up about the vision features in Mobileye's vision design in AP1 and what I can determine about how AP2's vision works from the neural network architecture I think it's fair to describe these two systems as fundamentally different approaches. When ME started development on vision for their system neural networking for vision wasn't a thing. So of course they designed their silicon to be an efficient accelerator of conventional vision heuristics and they programmed it conventionally. From comments the CEO made back in 2014 and 2015 I believe that they hand tuned all of the vision kernels for the system that went into AP1. This approach has advantages and disadvantages. On the upside the kernels are very computationally efficient so you can run with less hardware, which was really important when they started. But the more important difference is that the kernels are "designed".

When something is "designed" it tends to be fairly well understood. If you look at any particular situation where it isn't working you can figure out why and how to fix it. If you take a well defined use case and design a solution for it you can come up with something that works reliably within that use case. I think it's fair to say that ME developed their use case and designed a machine that functioned predictably within that use case. So that's great, but it means you need a well defined use case and it means you have to explicitly design the machine for that use case. And within that use case you'll get predictable behavior. (As an aside, I think Tesla pushed AP1 outside of ME's use case and it's not hard to see why that would be upsetting to ME. They didn't want to see any accidents on Tesla's closely watched vehicle being attributed unfairly to a failure of ME's vision system.)
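To make the idea of a "designed" vision system concrete, here's a toy sketch in Python. The Sobel operator below is a classic hand-tuned edge-detection kernel of the kind conventional vision pipelines are built from; in a "learned" system the same 3x3 weights would instead come out of training. The image and kernel are illustrative only, not anything from Mobileye's or Tesla's actual stacks.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over the image (valid mode, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A "designed" kernel: the classic Sobel operator for vertical edges,
# every weight chosen by a human expert rather than learned from data.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# A toy image: dark on the left, bright on the right.
image = np.array([[0, 0, 10, 10]] * 4, dtype=float)

# The response is large only where brightness jumps, i.e. at the edge.
response = convolve2d(image, sobel_x)
```

The point of the contrast: every number in `sobel_x` can be explained by the person who chose it, which is exactly what makes a designed system predictable and debuggable - and exactly the property that doesn't scale to thousands of use cases.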

Now for driving in the real world a single overall use case isn't feasible so you break the problem down into elements and scenarios and you design solutions for each one and then combine them all. It's labor intensive. VERY intensive of VERY expensive expert labor. Google started with similar limitations and a similar approach and has been throwing enormous resources at the problem for over a decade and still doesn't have a production system. They might have one soon, or they might not. Rodney Brooks - one of the pioneer luminaries in this field - has predicted that they are still 15 years away (his prediction is that it won't be a real thing for real people any sooner than 2032).

So the rapid advance of neural networks - which were almost entirely ignored until the last 5 years - allows for a different approach. Instead of "designing" the vision system you give it lots of data and create a process that allows the vision system to "learn" what it needs to do. This has some downsides compared to explicitly designed systems. For one thing when it's not working you don't know why explicitly. Just like there isn't one neuron in a crazy person's brain causing the problem there isn't one line of code in a neural network that's responsible for why a particular sign wasn't recognized in a particular use case. The system's knowledge wasn't created by the designers and it isn't organized in ways that allow the designers to tease out the causes of particular behaviors. This 'black box' aspect of neural networks is a major challenge to people who work with them.

So why use neural networks if they have this really ugly flaw? In a word, it's because they scale well. If I need 50 designers for 5 years to design a system that works well in 1 use case and I have 10,000 use cases then I need something like 500,000 designers for 5 years to do all 10,000 use cases. Or more likely 50,000 people for 50 years. With neural networks the problem is data and computers per use case rather than people and years per use case. So I need 10,000x as much data rather than 10,000x as many people. And to the extent that this simplistic analogy is true this second example is feasible within 5 or 10 years and the first one is not.
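The workforce arithmetic in that paragraph is easy to check, using the post's own illustrative numbers (50 designers for 5 years per use case, 10,000 use cases - not real engineering estimates):

```python
# Back-of-envelope check of the scaling argument above,
# using the post's illustrative numbers only.
designers_per_use_case = 50
years_per_use_case = 5
use_cases = 10_000

# Total effort if every use case must be hand-designed:
person_years = designers_per_use_case * years_per_use_case * use_cases  # 2,500,000

# Spread over a fixed schedule, the headcount is absurd either way:
headcount_over_5_years = person_years // 5    # 500,000 designers
headcount_over_50_years = person_years // 50  # 50,000 designers
```

Which is the post's point: with a learned system the 10,000x multiplier lands on data and compute per use case rather than on people and years.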

In this manner of thinking about AP1 and AP2, Tesla Vision is more of a 'learned' system than a 'designed' system, whereas the ME parts of AP1 are in the 'designed' category. And AP2 is still an immature 'learned' system at that - the training of it isn't yet properly sorted out. But the promise is that once they have the process for training the system worked out, it will scale up to handle the enormous variety of the real world much faster than a 'designed' system could scale up its workforce to deal with those thousands of use cases.

Ok, so this is a very roundabout answer to your question of why can't AP2 do simple things that AP1 could do years ago. And my grossly oversimplified response is that AP1 and AP2 are made different ways and those different methods have very different strengths and weaknesses. Tesla started over with a different approach because they need the ability to scale up the ability of AP without having to hire a vast army of people who don't even exist yet. Elon clearly believes that this tech is going to scale very rapidly once they have the formula worked out as his public pronouncements have consistently shown. 

And in the meantime there are situations that AP2 doesn't handle that AP1 does.

As an aside - I prefer the AP2 lane change over that of AP1. Maybe this is geography or a matter of taste rather than code? And as for the speed limit signs - I agree that reading signs is not a particularly hard problem. My guess is that they decided not to rely on reading signs rather than that they can't do it. Maybe because of a focus on using map annotations instead, or perhaps there's some subtle failure mode that relying on speed signs can lead to.

I recently sat through a lecture by Waymo's head of development, and he was describing all these crazy and kind of scary things that they run into. One example was about seeing an overhead sign reflected in the rear window glass of the car ahead. It only happens in rare situations, but since both lidar and vision reflect off of glass, both of those sensors see a big street sign lying in the road ahead of the car, and their car wants to swerve or brake to avoid the 'sign' in the road. It's a really obscure but serious failure, and it's much harder to deal with than it first seems. They get similar weird events when driving past glass-fronted buildings and big shiny buses. Even standing water on the road can do crazy things in the right situation. They have all these cases that they have to carefully test for, write code to fix, and then go out and test again. Heuristic approaches like the ones that Waymo uses work perfectly when they are working, but they are brittle - they fail spectacularly and suddenly, and the designers have to compensate for that. They make it easy to be overconfident because you can't see the failure coming, which I'm sure is one of the things that led Chris Urmson to commit to nothing less than full level 5 - because ordinary users can't be relied upon to respect the limitations of a system that they don't experience until it's too late.

I note that their initial test deployment service is going into Chandler AZ; a suburban development with few overhead signs, glass fronted buildings, or big shiny buses roaming the streets. And not a lot of standing water. I wonder if that's a coincidence.

Neural networks get wonky as they approach a failure point, and if you use them much you'll find that you can see a failure coming. My sense of AP1 and AP2 mirrors this - AP1 gives me perfect confidence even in places where it might be driving right along the edge of a gross failure. That makes AP1 more 'comfortable' because it's hiding its limitations, in a sense. AP2 conveys its lack of confidence to me by getting wobbly or moving outside the perfect center of my comfort zone. So depending on what you expect, that can make you not want to use it. I like it, but I understand why other people do not.


 



46 minutes ago, proftournesol said:

I found this interesting post on the Tesla Motors Club forum on autonomous driving and the differing approaches taken by Mobileye (now Intel) and Google compared to Tesla.

AP1 = Tesla's original Autopilot system, designed by Mobileye (ME)

AP2 = Tesla's in-house designed neural network based Autopilot

It may help explain why Tesla appears to be behind on autonomous driving, and why Musk has decided to go down this path: he believes it'll end up ahead.

 

A pure vision system is very, very hard to realise.

31 minutes ago, rmpfyf said:

 

A pure vision system is very, very hard to realise.

Obviously, Elon and his design team believe that it is hard rather than very very hard

2 hours ago, proftournesol said:

Obviously, Elon and his design team believe that it is hard rather than very very hard

 

No, Elon keeps hiring people that believe it and losing people that don't. 

 

There is not another manufacturer on the planet that believes it can be done with cameras only and not LIDAR. Particularly not at low light, under rain-degraded conditions, etc. So many AP2 failures concern poor decision making that'd have been avoided with sensor fusion w/LIDAR. 
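To make "sensor fusion w/LIDAR" concrete: the simplest version of the idea is just that two independent sensors failing together is far less likely than either failing alone. A minimal, hypothetical sketch follows (noisy-OR over per-sensor confidences; the function name and numbers are illustrative - real AV stacks fuse at the level of tracked objects, not single probabilities):

```python
def fuse_obstacle_confidence(p_camera: float, p_lidar: float) -> float:
    """Noisy-OR fusion: probability that at least one of two
    independent sensors has correctly flagged an obstacle."""
    return 1.0 - (1.0 - p_camera) * (1.0 - p_lidar)

# At night or in heavy rain a camera alone is shaky, but lidar carries
# its own illumination, so the fused estimate stays high:
night_fused = fuse_obstacle_confidence(0.30, 0.95)  # ~0.965
camera_only = 0.30  # what a vision-only stack is left with
```

Drop the lidar and the fused estimate collapses to the camera-only figure, which is the objection above in a nutshell.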

 

All the talk over ME vs Tesla kinda negates the fact that ME made an entire software stack to deliver their system, and Tesla isn't bothering with such trifling niceties. Tesla is trying to do something with cameras that ME themselves see as possible.

 

I don't really care whether it's 99, 99.9 or 99.9999% as good as a system doing sensor fusion with LIDAR. I'd prefer my family in the one with LIDAR. It'll likely pay for itself in the 0.0001% of the time it's needed... which makes Tesla's approach inane.

 

I get that LIDAR looks silly and that no one wants a car that's wearing a funny hat, though that's about as far as that situation needs to go.


Well, an Uber autonomous vehicle just killed a pedestrian in Tempe, AZ. We don't know much more - whether it was avoidable or not - but interesting times for the autonomous crowd.

 

Condolences to the deceased.



On 3/20/2018 at 8:50 AM, rmpfyf said:

Well, an Uber autonomous vehicle just killed a pedestrian in Tempe, AZ. We don't know much more - whether it was avoidable or not - but interesting times for the autonomous crowd.

 

Condolences to the deceased.

One of the questions is whether the Volvo's own systems (which were disabled in favour of the Uber systems) would have stopped the car in time.

 

It becomes a question of technology implementation and how the general public is now part of the testing ground for the manufacturers.


If the purpose of this technology is to save lives, perhaps there is more to be gained by putting blockchain on $100 notes so that drug money can't be laundered. 

 

Drug deaths are unnecessary.


Firstly, we don't know the result of the investigation of this incident so any conclusions are premature.

Secondly, is the outcome we are expecting from autonomous driving realistic at this stage? If we are expecting zero fatalities from a new technology before it's allowed on the roads then we are unlikely to get any complex new technologies.

If autonomous driving, even at this stage of development, delivered an immediate 50% reduction in fatal incidents then would it be worthwhile or should we withdraw the technology?
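For scale, that hypothetical can be put against the fatality figure quoted earlier in the thread. Both inputs are the thread's own numbers - the ~40,000 US road deaths per year from the Wired article and the poster's hypothetical 50% cut - not measured outcomes:

```python
# Rough illustration of the trade-off posed above, using the
# thread's own numbers rather than measured results.
annual_road_deaths = 40_000       # approximate US figure quoted earlier
hypothetical_reduction = 0.50     # the poster's hypothetical

lives_saved_per_year = int(annual_road_deaths * hypothetical_reduction)  # 20,000
```

Even under the hypothetical, that's the order of magnitude being weighed against individual incidents during testing.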


I see the biggest advantage to autonomous driving as helping elderly people in rural areas maintain a sense of freedom without risk to the general public. Too many grandmas weaving around on country roads at 80km/h posing a threat/obstacle to other road users. We shouldn't lock them up in a home, but they are a danger to themselves and others.

 

If an autonomous car maintains the specified road speed, keeps itself inside the lines, brakes or swerves around obstacles to avoid a collision, and wakes the driver to put their hands on the wheel, that's enough for me.

 

More than once the auto braking in my car has saved me from a bumper repair bill, but I don't expect it to stop for pedestrians that jump in front of me. 

1 hour ago, RockandorRoll said:

I see the biggest advantage to autonomous driving as helping elderly people in rural areas maintain a sense of freedom without risk to the general public. Too many grandmas weaving around on country roads at 80km/h posing a threat/obstacle to other road users. We shouldn't lock them up in a home, but they are a danger to themselves and others.

I'm sitting in one of my shops, which is in a square where the (mostly) elderly park in the carpark in the middle and shops surround it. They never cause fatalities at these speeds, but gawd, I hear lots of plastic getting crushed. Glad I don't park in that carpark; one of my staff has already been hit while delivering stock. Autonomous cars would be a great help in such circumstances.


An interesting piece by Ashlee Vance on Zoox

Quote

$800 Million Says a Self-Driving Car Looks Like This

Your robot taxi has arrived, kind of.

The mystery box sits inside an all-white room in an office building in San Francisco. It’s a large, wooden crate with no features other than the word “ZOOX” in big, black block letters and a sturdy-looking padlock. For about $100 million, you can get a key and have a look inside.

Few have had the pleasure. What they saw is a black, carlike robot about the size and shape of a Mini Cooper. Or actually, like the rear halves of two Mini Coopers welded together. The interior has no steering wheel or dashboard, just an open space with two bench seats facing each other. The whole mock-up looks like someone could punch a hole through it. But because you’ve just invested $100 million in the thing, you’ve earned the right to have a seat and enjoy a simulated city tour while you pray that this vision of a driverless future will come to pass.

Of the many self-driving car hopefuls, Zoox Inc. may be the most daring. The company’s robot taxi could be amazing or terrible. It might change the world—not in the contemporary Silicon Valley sense, but in a meaningful sense—or it might be an epic flop. At this point, it’s hard to tell how much of the sales pitch is real. Luckily for the company’s founders, there have been plenty of rich people excited to, as Hunter S. Thompson once put it, buy the ticket and take the ride.


The crate.

Photographer: Christie Hemm Klok for Bloomberg Businessweek

Zoox founders Tim Kentley-Klay and Jesse Levinson say everyone else involved in the race to build a self-driving car is doing it wrong. Instead of retro-fitting existing cars with fancy sensors and smart software, they want to make an autonomous vehicle from the ground up.

The one they’ve built is all-electric. It’s bidirectional so it can cruise into a parking spot traveling one way and cruise out the other. It makes noises to communicate with pedestrians. It has screens on the windows to issue custom welcome messages to passengers. If the founders prove correct, it will be the safest vehicle on the road, having replaced decades of conventions built around drivers with a type of protective cocoon for riders. And, of course, Zoox wants to run its own ride-hailing service.

Both founders sound quite serious as they argue that Zoox is obvious, almost inevitable. The world will eventually move to perfectly engineered robotic vehicles, so why waste time trying to incorporate self-driving technology into yesteryear’s cars? “We are a startup pitted against the biggest companies on the planet,” Kentley-Klay says. “But we believe deeply that what we’re building is the right thing. Creativity and technical elegance will win here.”

Kentley-Klay, it should be clear, is a salesman. “We want to transform our cities in the way that we live, breathe, and work with our families and communities that’s really profound,” he says, by way of explaining the company’s name. (It’s an abbreviation of zooxanthellae, the algae that helps fuel coral reef growth, not a nod to some colorful hallucination from Dr. Seuss.) Levinson, whose father, Arthur, ran Genentech Inc., chairs Apple Inc., and mentored Steve Jobs, comes from Silicon Valley royalty. Together, they’ve raised an impressive pile of venture capital: about $800 million to date, including $500 million in early July at a valuation of $3.2 billion.

Even with all that cash, Zoox will be lucky to make it to 2020, when it expects to put its first vehicles on the road. “It’s a huge bet,” Kentley-Klay concedes. In the next breath, though, he predicts the future for all of his competitors—Alphabet, General Motors, Tesla, Apple, Daimler, et al.—if the bet pays off: “They’re f---ed.”

Photographer: Christie Hemm Klok for Bloomberg Businessweek

Kentley-Klay is a 43-year-old native Australian with a linebacker’s physique, a mischievous manner, and a family history of gimme-the-damn-wheel adventurousness. His great-grandmother was the first Australian woman to get a driver’s license. His grandmother, the second Australian woman to get a pilot’s license, taught Kentley-Klay’s father, Peter, to fly during endurance air races between Sydney and London.

Young Tim was a tinkerer. Growing up in Melbourne, he tried to build a space shuttle out of spare parts from washing machines and lawnmowers, crafted a giant fiberglass whale to compete in a soapbox derby, and, until his parents found out, produced and sold fake IDs to schoolmates. In his 20s, he bought a decrepit 1958 Land Rover and turned it into a surfboard carrier he called the General. “It’s still his pride and joy,” says his mother, Robin.

After getting a degree in communication design, Kentley-Klay went into the ad business and became an industry-leading animator and video producer. He made ads for companies including Visa, McDonald’s, and Honda Motor, and his salesmanship improved with his design skills. “Every eight weeks, there was a new script,” he says. “You had to invent a new world with new characters and go through the really tough process of pitching an agency.”

In 2012, Kentley-Klay stumbled on a blog post about Google’s self-driving car project, then pretty much the only one in the field. He saw the company’s prototypes as unsightly half-measures, with their bulbous sensors mounted on some other company’s car like robot taxidermy. He started designing concepts, researching artificial intelligence, and, per the custom of would-be tech visionaries, wrote a manifesto. He also made videos depicting robo-taxied cities of tomorrow. Then, one day, he walked into his Melbourne office and announced he was off to America to fulfill his driverless dreams.

In a move that some will call devious and others will call ingenious, Kentley-Klay reached out to some of the biggest names in the field and told them he was making a documentary on the rise of self-driving cars. The plan was to mine these people for information and feel out potential partners. His first “interviewee” was Sterling Anderson, then a robotics researcher at MIT and later Tesla Inc.’s self-driving car chief. “I played the oldest trick in the director’s book: the vanity card,” Kentley-Klay says. “I showed up at MIT with a Canon and a bullshit microphone and interviewed Sterling for two hours in a grassy field. In my defense, I might have been making a documentary. The jury is still out on whether I am full of shit.”

Eventually, Kentley-Klay ended up in California and in front of Anthony Levandowski, then one of Google’s lead autonomous-vehicle engineers. They hit it off, and Levandowski was impressed enough to invite Kentley-Klay to give a presentation at the Googleplex in June 2013. On the appointed day, Kentley-Klay popped some dextroamphetamine and gave a talk to 20 people. “I said, ‘I’m Tim, and I will be the first to bring autonomous mobility to the world,’ ” he recalls. “It was a stupid f---ing thing to say, and I did not think it went very well.”

But the Google team was taken with Kentley-Klay’s chemistry-aided passion. They didn’t agree with all of his ideas, especially the bit about needing to build a whole new vehicle from scratch, but they were impressed by how much he’d thought things out. Rather remarkably, Google offered this oddball Australian nonengineer a job with the world’s leading autonomous vehicle team. “He had skills, and it’s good to have bright minds with opposing views around,” Levandowski says. Just as remarkably, Kentley-Klay said no. He didn’t think Google was radical enough.

Kentley-Klay returned to Australia. Several months went by. Contacts at Google and elsewhere stopped returning his emails. He began to think he’d made a terrible mistake. He says he saw a psychiatrist. Then he flew back to the U.S. in April 2014 and waited outside Levandowski’s house until the Google engineer got home one evening. They talked. Levandowski mentioned one guy Google really wanted to hire but could never get, a Stanford engineering grad student named Jesse Levinson.

Photographer: Christie Hemm Klok for Bloomberg Businessweek

Levinson, 35, is Kentley-Klay’s polar opposite. He’s thin, quiet, and picks his words carefully. He does his absolute best never to bring up his fabled Silicon Valley parentage. At Stanford, Levinson became the protégé of Sebastian Thrun, the professor who went on to lead Google’s self-driving car project. “Jesse has been one of my smartest students ever,” Thrun says.

While at Stanford, Levinson invented a new way to calibrate the sensors on self-driving cars. These types of vehicles typically rely on cameras and lasers to build a picture of the world around them. To fine-tune the imaging systems, engineers often hold up posters with checkerboard and target patterns as a baseline. In the field, though, the sensors can be difficult to reconfigure when out of whack. Levinson wrote software that made it possible to configure the sensors while driving, using objects in the real world to provide feedback instead of the test patterns. “The vehicle can figure out where its sensors are with superhuman levels of accuracy, down to 2 millimeters and 1/100th of a degree,” he says.
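The online-calibration idea Levinson describes can be sketched as a small optimization: project the lidar points through candidate extrinsics and score how well they land on image edges, so well-aligned extrinsics light up more edge pixels. The sketch below is my own toy illustration, not Zoox's actual method; it brute-forces a single yaw axis, while a real system would continuously refine all six extrinsic degrees of freedom.

```python
import numpy as np

def rot_y(theta):
    """Rotation about the camera's vertical (yaw) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def alignment_score(points, edge_map, R, fx=200.0, cx=32.0, cy=32.0):
    """Project 3-D points through candidate extrinsics R (simple pinhole
    model) and sum the edge strength under each projected pixel."""
    p = points @ R.T
    z = p[:, 2]
    u = np.round(fx * p[:, 0] / z + cx).astype(int)
    v = np.round(fx * p[:, 1] / z + cy).astype(int)
    h, w = edge_map.shape
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return edge_map[v[ok], u[ok]].sum()

def calibrate_yaw(points, edge_map, candidates_deg=np.linspace(-2, 2, 81)):
    """Grid-search a one-axis yaw correction that maximizes alignment."""
    scores = [alignment_score(points, edge_map, rot_y(np.deg2rad(d)))
              for d in candidates_deg]
    return candidates_deg[int(np.argmax(scores))]
```

Fed a synthetic edge map built from a known small miscalibration, `calibrate_yaw` should recover roughly that offset; the real trick, per the article, is doing this from natural scene structure while driving.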

After Kentley-Klay tracked down Levinson, they agreed that the salesman’s vision and design skills could pair well with the engineer’s technical acumen. Both liked the idea of battling conventional thinking and building something they could call their own. “I could never tell what Google’s end goal with the technology really was,” Levinson says. “And I have a hard time motivating myself to work hard on something if I can’t see in my mind where it was going.”

Levinson didn’t buy in right away, however. First, he hired a private investigator to run a background check on his would-be partner. “I didn’t think he was a crazy person,” Levinson says. “I just didn’t know who he was, and he had an unusual background for someone starting a self-driving car company in Silicon Valley.” The only things that turned up were a couple of speeding tickets. “I didn’t know whether to take it as an insult or a compliment that he was taking me seriously,” Kentley-Klay says. “But they never found the body, so I passed the test.” Zoox incorporated on July 29, 2014.

Smack dab in the middle of Silicon Valley sits the SLAC National Accelerator Laboratory. The most distinguishing thing about the 426-acre compound is a 2-mile-long particle accelerator that cuts through the grassy hills of Menlo Park and onto the Stanford campus. It’s a high-security jewel of American nuclear physics. It also contains a series of winding, out-of-sight roads that are perfect for quietly testing autonomous vehicles. Somehow, Kentley-Klay persuaded someone there to let him use an old firehouse at the complex as Zoox’s first proper headquarters.

In early 2015, Zoox began hiring staff and retrofitting the firehouse into a prototyping facility. Engineers created skeletal versions of the robots while a software team worked on the contraption’s brain. In a distressing early sign for Zoox’s investors, Kentley-Klay also spent $16,000 on a Sub-Zero office refrigerator because he thought it looked cool.

From those first days, Zoox and its founders had a clear picture of the vehicle they wanted. It would have an identical front and rear, and would be easy to service on the rare occasions when it wore out its built-in redundant parts. Each wheel would have its own motor, so the vehicle could make precise maneuvers in tight spaces and park just about anywhere. And its array of sensors and cameras would be seamlessly integrated, not jammed onto an existing vehicle.


Three generations of self-driving vehicles (VH1, VH4, and VH5) at Zoox manufacturing headquarters in Foster City, Calif.

Photographer: Christie Hemm Klok for Bloomberg Businessweek

Lines of LEDs on the front and rear of the vehicle would send signals to other drivers, such as alerts that the robot taxi had spotted an obstruction up ahead on the road. Similarly, its directional sound system would let out a bleep or a blurp to tell a pedestrian in a crosswalk that the vehicle saw him, or to sound an alarm to the driver of a fast-approaching vehicle that he needed to get off his smartphone and hit the brakes to avoid a wreck. Early on the Zoox engineers considered a giant airbag that would envelop the vehicle before an accident; they ultimately went with more conventional airbags for the cabin. Zoox cars will come with high-end audio, plush seats, and some sort of conversational app for interacting with the riders.

The company has six prototypes, or mules, in auto industry lingo. They’re named VH1, VH2, and so on — the VH being short for “vaporware horseshit,” which is how a car blog once described the company’s technology. During a recent visit to SLAC, the mules were put into action with a series of demos. In one, a mule parked with extreme precision in a spot outside the firehouse. In another, it came to a controlled stop for a pedestrian making his way through a crosswalk and issued a bleep-bloop in greeting. In a separate demonstration at an abandoned airfield, the mules really showed off, tearing autonomously through an obstacle course at 50 mph. Your reporter, in crash helmet and safety harness, had the privilege of being the first person to experience this test from a backward-facing seat.


A passerby photographs a Zoox self-driving car as it pulls out of the back lot at the San Francisco office and hits the road for a test drive.

Photographer: Christie Hemm Klok for Bloomberg Businessweek

The real proving ground for any self-driving car, though, is actual streets and highways where texters, road ragers, and the generally erratic roam. On a weekday in May, Kentley-Klay greets me in a parking lot behind the firehouse. A Toyota Highlander is parked about 100 feet away. The Zoox prototypes aren’t yet street legal, which means the company must rely on a fleet of Highlanders to train and test its sensors and software. The vehicles have cameras and lasers dangling off their sides and huge, humming computers in the rear storage area.

Kentley-Klay hands me an iPhone. I open the Zoox app and summon one of the Highlanders. We hop in and tell the car to head north toward Zoox’s new headquarters in Foster City, which is about 20 miles away, to drop off Kentley-Klay and pick up Levinson. From there, Levinson and I ride another 20 miles to San Francisco. Bay Area traffic being what it is, the whole journey takes about 90 minutes and is, well, pretty amazing.

Highways are easier for self-driving cars. Many makes and models now on the road use adaptive cruise control and other features that can follow another car on the freeway and maintain a safe distance. Prototypes from Waymo, Alphabet Inc.’s self-driving spin-out, can handle city streets, albeit in less densely populated areas such as Arizona. It’s really only Zoox and GM Cruise that have been willing to take outsiders on autonomous drives through a place as busy as San Francisco.
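The adaptive cruise control mentioned here is conventionally modeled as a constant time-gap policy: hold a following distance that grows with your own speed, and nudge acceleration toward it. The sketch below is a generic textbook illustration with invented gains and limits, not any manufacturer's actual controller.

```python
# A toy constant time-gap policy, the textbook model behind adaptive
# cruise control. Gains and limits are invented for illustration.

def desired_gap_m(speed_mps, time_gap_s=1.5, standstill_m=2.0):
    """Target following distance: a standstill buffer plus a time gap."""
    return standstill_m + time_gap_s * speed_mps

def accel_command(gap_m, speed_mps, lead_speed_mps,
                  k_gap=0.2, k_speed=0.6, max_accel_mps2=2.0):
    """Proportional control on gap error and relative speed, clipped
    to a comfortable acceleration envelope."""
    gap_error = gap_m - desired_gap_m(speed_mps)
    rel_speed = lead_speed_mps - speed_mps
    a = k_gap * gap_error + k_speed * rel_speed
    return max(-max_accel_mps2, min(max_accel_mps2, a))

# At 25 m/s with a 50 m gap to a lead car also doing 25 m/s, the target
# gap is 39.5 m, so the controller accelerates (clipped to 2 m/s^2):
print(accel_command(50.0, 25.0, 25.0))  # 2.0
```

City driving is harder precisely because no such simple one-dimensional policy covers pedestrians, cross-traffic, and unprotected turns.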


A robotic arm sculpts an entryway for the Foster City location.

Photographer: Christie Hemm Klok for Bloomberg Businessweek

The Zoox vehicle makes its way through the suburbs with ease, politely waiting its turn at four-way stops and giving cyclists plenty of room. When a black delivery truck unexpectedly whips across two lanes of traffic, the Highlander stops and avoids a collision. A few minutes later, we get on the freeway, and the Toyota merges in a manner that could be described as ultra-safe mode. Instead of jamming the accelerator to outrun oncoming traffic, it hugs the edge of the on-ramp as it waits for an opening.

It’s in the city, though, where Zoox really shines. The screens inside the vehicle show an overwhelming amount of information, as the computer vision software keeps track of cars, people, stoplights, and road markers all at the same time. Unlike many self-driving cars, it glides to stops. At an intersection with a left turn, it allows oncoming traffic to pass and then waits for some slow pedestrians. Overall, the vehicle performs so well that you forget no one is driving.

In May, Zoox moved most of its 500-plus employees into a new 130,000-square foot headquarters in Foster City that Kentley-Klay helped design. The facility is sleek and spare, with lots of glass and polished concrete. At its center is an all-white manufacturing hub, where workers will soon begin hand-building the first fleet of vehicles. A machine runs a Zoox drivetrain in place, simulating thousands of hours of driving to keep training the autonomous system. “It’s like a virtual reality suit for a car,” Kentley-Klay says. Nearby, a custom computer rig contains 1,000 ultra-high-end graphics chips that can each perform 40 trillion calculations per second on driving and artificial intelligence problems. “It might be the largest supercomputer of any startup,” Levinson says. In another part of the facility, there’s an enclosed area that only a handful of specially authorized people can enter. This is the top-secret zone where Zoox will put the finishing touches on things such as its vehicle’s industrial design, sound system, and branding. (Because he chairs Apple, Arthur Levinson hasn’t seen any of this. “It’s not that I don’t want to,” he says. “It’s just better to keep a little distance and read about Jesse in the newspaper.”)

Current and former employees say Kentley-Klay and Levinson have handled their first gigs as company chiefs well, for the most part. Zoox has managed to hire away hundreds of engineers from Tesla, Apple, Google, Ferrari, and Amazon.com, in large part because it offers a harder engineering challenge than anywhere else. The co-founders do have their controlling sides. Kentley-Klay is health-conscious, forbidding sodas in the office—even diet—and publicly shaming employees who send out “doughnuts in the kitchen” messages. Levinson takes pride in correcting grammar, to the point that employees proofread one another’s e-mails.


A self-driving vehicle in Zoox’s garage at SLAC in Palo Alto.

Photographer: Christie Hemm Klok for Bloomberg Businessweek

The pair have mastered the hyperbolic vernacular of the Silicon Valley startup scene. Text running around the wheel wells of the Zoox vehicles reads, “Infinity is enough,” a phrase the company has trademarked. Kentley-Klay’s own name is another invention. He was born Tim Kentley and adopted the Klay. “I have added Klay to my surname, as I find I just love making things and [sic] is a big part of me,” he wrote to the Zoox employees in 2013. “So, clay, or mud, is the primal aspect of this spirit, and ‘K’lay is a word play on that to keep the K in the mix of the evolving family name.”

There are plenty of people in the self-driving car business who regard Zoox as pure VH. Levandowski is one of them, even though he considers Levinson and Kentley-Klay friends and first-class minds. Levandowski still drives an old Lexus he bought from Art Levinson, and he designed a stockpicking system called FutureGame with Jesse.

“I don’t think making the car is the gangsta differentiation,” Levandowski says. “It’s overly complicated, and the wrong way to go.” Zoox, in essence, wants to beat Waymo at self-driving car technology, Tesla at electric vehicles, and Uber at ride-sharing. “If anything goes bad with one piece of that, you’re f---ed,” Levandowski says. “But they’ve heard that a billion times.”

Kentley-Klay and Levinson both readily admit that Zoox could end badly. Neither, however, appears to have any intention of altering course, and they vow to stun competitors and consumers with surprises yet to come. “For me, it’s not complicated,” Kentley-Klay says. “You have to think about what will give you the best result and then walk down that road, even if it means it’s the harder road to go down.”


Suggesting TKK has a linebacker's physique is a bit of a stretch.


An interesting take on Tesla's autonomous vehicle development from a software engineer on CleanTechnica

Quote

Tesla Autopilot Improvements Are *Insane* — Technically Speaking

August 2nd, 2018


Editor’s note: Paul Fosse is the kind gentleman who eagerly drove his brand new Tesla Model 3 from Tampa to Sarasota, Florida, to let me test drive it. I later found out he was a software engineer with three decades of experience and today realized he could provide great context and insight into software and hardware matters related to Tesla, Waymo, and others. His debut article is below, covering the insane Tesla Autopilot news coming out of Tesla’s quarterly conference call. Enjoy!


To kick things off, Tesla CEO Elon Musk gave an overview of Tesla’s recent and not so recent advancements in Autopilot. The soft-spoken, technical, and brief statements could potentially be glossed over as a simple update to Tesla’s Autopilot suite; some perennial Tesla critics may even want to call it spin. However, those of us who follow this kind of stuff had our eyes nearly popping out of our skulls.

Here’s a bullet-point overview of Elon’s core statements:

¤  New hardware is a plug-and-play option that can be put into existing cars.

¤  It delivers 10 times the performance of the hardware used before.

¤  Tesla has been working on this for 3 years.

¤  Tesla has been running it in stealth mode up until today.

“I think we’re making pretty radical advances in the core software technology and the division beyond that,” he summarized.

Stuart Bowers, a fresh new face at Tesla who is now VP of Engineering, got a warm welcome from Elon and provided an update on Autopilot software. He noted that their current focus is finalizing version 9 (v9), which will roll out in approximately a month or so to early users/testers and then probably hit the crowds in September. Key features of this version of Autopilot include:

¤  It will start to work with navigation.

¤  It will start to handle exits, using navigation to decide whether to take them.

¤  It will hand control to and from the driver.

¤  It will figure out what lane you're in and what lane you should be in.

¤  It will handle on-ramps and off-ramps.

¤  It will change lanes for you.

¤  It will include some new (undisclosed) safety features, enabled by better image processing.

“We’re also kind of digging in on some new safety features,” he stated. “I think probably the thing which is most exciting for me [that is] coming from the team is just seeing the foundation that’s been built out over the last two years. I think Andrej will talk a lot about some of the perception and vision work we’ve done there with the data engine. That has sort of allowed us to build on top of that very, very quickly and I think we’re all starting to see a new set of safety features that really only make sense in this world — we have extremely high understanding of what’s happening around the vehicle.”

Andrej Karpathy, Tesla’s fairly new Director of AI, joined the call as well. Andrej gave a short overview of his background and chimed in with a list of highlights from the vision team he leads.

¤  He has worked with neural networks for about 10 years, first as a PhD student at Stanford and later as a research scientist at OpenAI.

¤  The vision team is responsible for processing the video streams from the cameras in the vehicle into an understanding of what is around us.

¤  He’s particularly excited about “building out this infrastructure for computer vision that underlies all the neural network training, trying to get those networks to work extremely well, and make that a really good foundation on top of which we build out all the features of the Autopilot like the features associated with the v9 release that’s going to come up and that Stuart has mentioned.”

Pete Bannon, who is currently the head of Autopilot hardware version 3 development, was the third Autopilot leader to join the call. He was hired from Apple 2½ years ago. Notes from his segment are quite long, with them coming sometimes from him and sometimes from Elon. Here are the notes:

¤  V3 hardware chips are up and working and Tesla has “drop-in replacements” for Model S, Model X, and Model 3 — all of which have been driven in the field.

¤  It supports neural nets with full frame rates and lots of idle cycles to spare.

¤  Pete’s very excited about what Andrej’s software team will be able to do with all that extra power.

¤  He gave a presentation to Andrej’s team last month explaining how it worked and what it was capable of. A very excited team member said top AI developers will want to come to work at Tesla just to get access to this hardware.

A little more on Pete’s background: He was at Digital Equipment in 1984, then was an Intel Fellow working on Itanium and led the design of the first ARM32 processor for the iPhone 5. He also led the design team for the world’s first ARM64 processor, which went into the iPhone 5s. He had been working on performance modeling and improvements for 8 years at Apple before coming to Tesla. In two years at Tesla he architected the version 3 hardware for Tesla’s cars. It will be coming to cars next year.

Here’s the stunner: It’s 10 times faster than anything else in the world at running neural nets! It has made a jump from 200 frames a second to 2,000 frames a second — “and with full redundancy and fail-over.”

Not good enough for you? Here’s more: “And it costs the same as our current hardware and we anticipate that this would have to be replaced, this replacement, which is why I made it easy to switch out the computer, and that’s all that needs to be done. If we take out one computer, plug in the next. That’s it. All the connectors are compatible and you get an order of magnitude more processing and you can run all the cameras at primary full resolution with the complex neural net. So it’s super kickass.”

They did a survey of what everyone else was doing and whether they had a CPU or a GPU to speed up neural networks, but nobody was doing a bottom-up design from scratch to run neural nets, which is what they decided they should do. [Editor’s side note: Think any other automakers are working on such improvements?]

Here’s more from Pete summarizing the improvements: “it’s a huge number of very simple computations with the memory needed to store the results of those computations right next to the circuits that are doing the matrix calculations. And the net effect is an order of magnitude improvement in the frames per second. Our current hardware, which — I’m a big fan of NVIDIA, they do great stuff, but using a GPU … fundamentally, it’s an emulation mode, and then you also get choked on the bus. So, the transfer between the GPU and the CPU ends up being one that constrains the system. …

“We’ve been in, like, semi-stealth mode basically for the last two to three years on this, but I think it’s probably time to let the cat out of the bag because the cat’s going to come out of the bag anyway.”

Tesla had the benefit of seeing what its neural networks looked like 2–3 years ago and what its projections were for the future. It leveraged that knowledge and the ability to totally commit to that style of computing, without any concern for other types of computing (a constraint that handicaps other designers' ability to make radical changes).

Notably, this hardware runs the net on the bare chip. It is reportedly the world’s most advanced computer designed for autonomous driving.
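Bannon's point about getting "choked on the bus" can be made concrete with roofline-style arithmetic: a workload is transfer-bound when shipping its data over the bus takes longer than computing on it, so co-locating memory with the matrix circuits removes that ceiling. The figures below are generic illustrations, not Tesla's or NVIDIA's actual numbers.

```python
# Back-of-envelope roofline check for the bus bottleneck described above:
# a layer is transfer-bound when moving its data over the bus takes longer
# than computing it. All numbers here are illustrative.
def is_transfer_bound(flops, bytes_moved,
                      compute_flops_per_s=10e12,   # ~10 TFLOPS of matrix math
                      bus_bytes_per_s=16e9):       # ~16 GB/s, PCIe 3.0 x16
    transfer_s = bytes_moved / bus_bytes_per_s
    compute_s = flops / compute_flops_per_s
    return transfer_s > compute_s

# A 1-GFLOP layer that must ship 100 MB of activations over the bus:
# 6.25 ms of transfer vs 0.1 ms of compute, i.e. hopelessly bus-bound.
print(is_transfer_bound(1e9, 100e6))  # True
```

With weights and activations stored next to the compute circuits, `bytes_moved` over the external bus shrinks toward zero and the chip can stay compute-bound at full frame rate.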

My Own Analysis, Opinion, & Speculation

The biggest surprise of the Tesla conference call was when Elon “let the cat out of the bag” on his stealth custom chip, which, I’ll repeat, was designed to process neural networks at 10 times the speed of the old hardware (which was considered state of the art until today).

Computer hardware has advanced according to Moore’s Law for about 55 years. Transistor count and performance grow by about 100% every 2 years, so when someone announces a 10-fold increase in performance in one generation, it’s a real surprise!
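The arithmetic behind that surprise: at a doubling every 2 years, a 10x jump would normally take around six and a half years, not one hardware generation.

```python
import math

# At Moore's-law pace (performance doubling every ~2 years), how long
# would a given speedup normally take to arrive?
def years_for_speedup(speedup, doubling_period_years=2.0):
    return doubling_period_years * math.log2(speedup)

print(round(years_for_speedup(10), 2))  # 6.64 years for a 10x jump
print(2000 / 200)                       # the claimed fps jump is that same 10x
```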

Tesla didn’t do this with better semiconductor technology, but by organizing its transistors to more efficiently process the type of instructions Tesla needs to run to handle the video from its 8 high-definition cameras. The car needs to understand what is around it to best decide how to drive. Other companies are betting on lidar to do this, but Elon strongly believes that computer vision is the way to solve the problem. Elon’s credibility on this topic presumably got a big boost today with the announcement that the new hardware can process 2,000 frames of video per second, which dwarfs the previous state-of-the-art 200 frames per second possible with the V2 hardware that ships in all of Tesla’s cars today.

What they didn’t seem to realize — or at least didn’t acknowledge — is that this hardware that processes neural nets 10 times faster and more efficiently than anything else in the world has applications far beyond autonomous driving. This could be similar to how Amazon started selling books and other goods from a website but became so good at maintaining an infrastructure for its site that it decided to sell cloud services to others. Now, Amazon’s cloud infrastructure business may be bigger than its website retail business (I’m not sure). Similarly, Tesla could sell this chip into other industries that have nothing to do with transportation or energy, but that need to run neural networks more quickly to solve their business problems.

It seems Tesla’s valuation should go up significantly from this news, but we’ll see.



About the Author

Paul Fosse: I've been a software engineer for over 30 years, first working on EDI software and more recently developing data warehouse systems in the telecommunications and healthcare industries. Along the way, I've also had the chance to help start a software consulting firm and do portfolio management for several investment trusts. In 2010, I took an interest in electric cars because gas was getting expensive. In 2015, I started reading CleanTechnica and took an interest in solar, mainly because it was a threat to my oil and gas investments in my investment trusts.


Obviously the dude isn't aware of what Google's been working on for, um, considerably longer.

