Most importantly, Ford said it would develop much of the core technology for these products in-house in order to reduce costs and retain greater control. Mind you, the company isn’t creating its own large language models or designing its own silicon, like Tesla and Rivian. Instead, it will build its own electronic and computer modules that are smaller and more efficient than the systems currently in place.
“By designing our own software and hardware in-house, we’ve found a way to make this technology more affordable,” Ford’s chief officer for EVs and software, Doug Field, wrote in a blog post. “This means we can put advanced hands-free driving into the vehicles people actually buy, not just vehicles with unattainable price points.”
Amid all this, the company is trying to stake out a middle ground on AI: not going all-in on a robot army like Tesla and Hyundai, while still committing to some AI-powered products, like voice assistants and automated driving features.
Ford said its AI assistant will launch on the Ford and Lincoln mobile apps in 2026 before expanding to the in-car experience in 2027. An example: a Ford owner standing in a hardware store, unsure how many bags of mulch will fit in the bed of their truck. The owner could snap a photo of the mulch and ask the assistant, which could respond with a more accurate answer than, say, ChatGPT or Google’s Gemini, because it has all the information about the owner’s vehicle, including truck bed size and trim level.
At a recent tech conference, Ford’s CFO Sherry House said Ford would be integrating Google’s Gemini into its vehicles. That said, the automaker is designing its assistant to be chatbot-agnostic, meaning it will work with a variety of different LLMs.
“The key part is that we take this LLM, and then we give it access to all the relevant Ford systems so that LLM then knows about what specific vehicle you’re using,” Sammy Omari, Ford’s head of ADAS and infotainment, told me.
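Ford hasn’t published any of this code, but the idea Omari describes — a chatbot-agnostic assistant that injects vehicle-specific context into whatever LLM sits behind it — can be sketched in a few lines. Everything here (the field names, the `VehicleContext` type, the stub backend) is hypothetical, not Ford’s actual design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VehicleContext:
    # Hypothetical fields an automaker's backend might expose to the assistant
    model: str
    trim: str
    bed_length_in: float

def build_prompt(question: str, ctx: VehicleContext) -> str:
    # Prepend vehicle-specific facts so any LLM can answer with them
    return (f"Vehicle: {ctx.model} {ctx.trim}, bed length {ctx.bed_length_in} in.\n"
            f"Question: {question}")

def ask(question: str, ctx: VehicleContext, llm: Callable[[str], str]) -> str:
    # The LLM backend is just a callable, so Gemini, ChatGPT, or any other
    # model can be swapped in without changing the assistant itself
    return llm(build_prompt(question, ctx))

# Stub standing in for a real LLM API call
def stub_llm(prompt: str) -> str:
    return "Answer grounded in: " + prompt.splitlines()[0]

ctx = VehicleContext("F-150 Lightning", "Lariat", 67.1)
print(ask("How many bags of mulch fit in the bed?", ctx, stub_llm))
```

The design choice worth noting is that the vehicle data lives outside the model: the assistant stays useful no matter which LLM vendor Ford plugs in underneath.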
Autonomous driving features will come later with the launch of Ford’s Universal EV Platform. Ford’s flagship product right now is BlueCruise, its hands-free Level 2 driver-assist feature that’s available only on certain highways. Ford plans on rolling out a point-to-point hands-free system that can recognize traffic lights and navigate intersections. And then eventually it will launch a Level 3 system, in which the driver still needs to be able to take over the vehicle upon request but can take their eyes off the road in certain situations. (Some experts have argued that L3 systems can be dangerous given the need for drivers to stay attentive despite the vehicle performing most of the driving tasks.)
Omari explained that by rigorously scrutinizing every sensor, software component, and compute unit, the team has achieved a system that is approximately 30 percent lower cost than today’s hands-free system, while delivering significantly more capability.
All of this will depend on a “radical rethink” of Ford’s computing architecture, Field said in the blog post. That means a more unified “brain” that can process infotainment, ADAS, voice commands, and more.
For almost a decade, Ford has been building a team with the relevant expertise to spearhead these projects. The former Argo AI team, originally focused on Level 4 robotaxi development, was brought on board the mothership for their expertise in machine learning, robotics, and software. And a team of BlackBerry engineers, who were initially hired in 2017, is now working on building next-generation electronic modules to enable some of these innovations, Paul Costa, executive director of Ford’s electronics platforms, told me.
But Ford doesn’t want to get into “a TOPS arms race,” Costa added, referring to the metric that measures an AI processor’s speed in trillions of operations per second. Other companies, like Tesla and Rivian, have stressed the processing speed of their AI chips to prove how powerful their automated driving systems will be. Ford’s not interested in playing that game.
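As a rough illustration of where a TOPS figure comes from (the chip below is entirely hypothetical, not Ford’s or anyone else’s hardware): multiply the number of parallel multiply-accumulate units by two operations per MAC and by the clock rate, then divide by a trillion.

```python
# Hypothetical chip, assumed numbers — just to show the arithmetic behind TOPS.
mac_units = 4096    # parallel multiply-accumulate units (assumed)
ops_per_mac = 2     # one multiply + one add per MAC
clock_hz = 2e9      # 2 GHz clock (assumed)

# TOPS = trillions of operations per second
tops = mac_units * ops_per_mac * clock_hz / 1e12
print(f"{tops:.1f} TOPS")  # prints "16.4 TOPS"
```

The point of Costa’s comment is that this headline number says nothing about cost or module size, which is why Ford optimized across all three instead.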
Rather than optimizing for performance alone, the team pursued a balance of performance, cost, and size. The result is a compute module that is significantly more powerful, lower in cost, and 44 percent smaller than the system it replaces.
“We’re not just choosing one area here to optimize around at the expense of everything else,” Costa said. “We’ve actually been able to optimize across the board, and that’s why we’re so excited about it.”