How AI like ChatGPT can build trust in autonomous vehicles

In recent months, the artificial intelligence [AI] chatbot known as ChatGPT has taken the world by storm. Developed by OpenAI and launched in November 2022, the technology has grabbed headlines for its ability to hold human-like conversations, answer complex philosophical questions, and automatically carry out tasks like composing emails, essays, and code.

How ChatGPT is normalising AI in the public consciousness

ChatGPT represents a significant step forward for the use of AI in a broad range of industries, as well as in everyday life. Despite being in its infancy, the chatbot has already shown itself to be incredibly sophisticated and has sparked much debate about the potential applications for generative AI in the future.

For example, ChatGPT and similar technologies could lead to better content personalisation for websites and apps, thereby enhancing customer experiences, and improve the speed and accuracy of financial transactions. What’s more, it even holds the potential to provide medical professionals with information on the latest research and treatments, which could prove vital in saving patients’ lives.

For automakers keen to expand their autonomous vehicle operations, the normalisation of AI like ChatGPT in the public consciousness is highly advantageous. Not only is the autonomous car market expected to reach $196.97 billion by 2030, but the mass rollout of AI-powered driverless cars represents an opportunity to make vehicles considerably safer for passengers and pedestrians alike. The growth of this market, however, is contingent on automakers’ ability to establish trust in AI among their consumers.

The challenges for automakers in championing autonomous vehicles

Historically, however, automakers have struggled to build engaging voice-driven experiences, and the industry is littered with examples of poorly performing voice-activated solutions.

The opportunity with solutions like ChatGPT takes this challenge one step further, as automakers now need to build trust in the autonomous future they envision. The challenge is compounded by the fact that AI has long been something of a nebulous concept to most consumers: the stuff of science fiction films, rather than a technology they would find in their own cars. And where expectations do exist, television has often set them impossibly high – anyone remember KITT?

With AI still very much in its early stages of development, many people who have gone through most of their lives without this technology are understandably sceptical of it. This is especially true of those from older generations.

A recent poll found that 62% of millennials – those born between 1981 and 1996 – believe that AI will have a positive impact on society. However, only 38% of baby boomers – those born between 1946 and 1964 – reported feeling the same way. There is clearly a significant generational divide when it comes to trusting AI.

This accentuates the problem that automakers face in rolling out driverless cars. Although manufacturers have recently started to achieve a certain degree of success with driver-assist vehicles, the same cannot yet be said for their self-driving counterparts. While modern drivers want their car to be functional, comfortable, safe, and equipped with additional infotainment features – often, ironically, powered by AI – many do not seem ready to give total control to an AI pilot.

This is because, on a psychological level, placing our lives in the hands of a machine is an alien concept. Consider the other modes of transport we happily travel in – planes and trains, for example – despite having no control ourselves. We feel comfortable doing so because of the human element: the knowledge that another, highly trained person is in control, even though we aren't.

Driving is one of the most personal and dangerous activities that many of us will carry out on a daily basis. Not having direct control over this, therefore, instinctively makes us feel less safe, and the thought of riding in a driverless car potentially becomes far less appealing as a result.

Rethinking control: The safety advantages of AI pilots

While human nature may tell us that the roads are safer when we are in control, evidence suggests that the opposite is actually true. Over 30,000 people are killed in car crashes each year in the United States and, in 90% of cases, human error is at fault. When you consider all the ways a driver can become distracted – eating, drinking, messaging on their phone, or even being under the influence – it’s not difficult to imagine how easily a crash can occur. And it isn’t enough for drivers to assume that they are safe simply because they have personal responsibility behind the wheel, given they have little to no control over the actions of other road users.

When it comes to self-driving cars, however, the consensus among most experts is that we are far safer in the hands of technology than we are in those of human drivers. Indeed, it is estimated that driverless vehicles could save up to 1.5 million lives in the United States alone, and almost 50 million lives worldwide over the next 50 years.

This is because the concept behind self-driving cars largely removes the threat of human error, instead placing trust in highly sophisticated, rigorously tested algorithms designed to improve passenger safety. Given the evidence that AI-powered driverless cars are safer than the vast majority of human drivers, it is somewhat ironic that so many people remain sceptical of the technology. However, as AI like ChatGPT continues to grab the attention of consumers the world over, public trust in vehicular AI will likely grow to a point where automakers can feel confident in receiving a significant return on their investment in the technology.

Technologies like ChatGPT also provide a platform for more natural feedback and interaction between a vehicle and its occupants. This could prove a key turning point in winning hearts and minds across all age groups.

The need for enhanced security as vehicles gain more autonomy

Even though driverless cars are considerably safer than those piloted by humans, the growth of in-vehicle AI will inevitably lead to greater concerns around connected cars’ security. With AI gathering considerable amounts of personal user data, a vehicle’s system is going to hold lots of information on the driver, which, if it fell into the wrong hands, could prove very harmful.

For example, a hacker could gain access to a car's on-board diagnostics [OBD] port or other unprotected interfaces, which they could then use to effectively hijack the vehicle and send it on a joyride that passengers have no control over. This would, of course, be totally counterproductive to what automakers are trying to achieve through autonomous vehicles – namely, greater standards of safety.

On top of this, widespread cybersecurity breaches reported by consumers would likely erode trust in AI and damage manufacturers' reputations. It is clear, therefore, that as AI becomes more prevalent within vehicles, cybersecurity measures must be made as robust as possible to respond to evolving threats.

Championing vehicle security in an autonomous future

Thankfully, however, it seems that automakers are increasingly coming together to collectively tackle the issue of trust and security in AI-connected vehicles. While automakers are, of course, market competitors, security is certainly an area where many manufacturers seem happy to share their knowledge and experience with their industry peers.

After all, in a highly autonomous world, having the most secure vehicle on the market is a great boast, but if the vehicle in the next lane hasn’t been properly secured, the safety of all the cars on the road becomes compromised. This only serves to highlight that the challenge of cybersecurity is one that automakers cannot hope to tackle by themselves.

By coming together to address the issue, automakers can create workable, future-proof solutions that will enable the autonomous market to thrive. This may lead, for example, to the growth of post-quantum cryptography [PQC] within the industry. PQC refers to a new generation of cryptographic algorithms designed to withstand attacks from quantum computers, which are expected eventually to break the public-key systems in use today. In addition to a wide range of applications in software-centric industries – including finance, telecommunications, data, and application security services – PQC is highly beneficial in vehicle cybersecurity.

By implementing this technology, automakers can strengthen the security of their connected cars, thereby building consumer trust and adoption of such vehicles. While PQC is just one of the various solutions that manufacturers can implement to address the challenges they face, they must evidently work together if they hope to be successful in their efforts.
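PQC covers a broad family of schemes, but one hash-based approach is simple enough to sketch. Below is a minimal, illustrative Lamport one-time signature in Python, using only the standard library – hash-based signatures of this kind are believed to resist quantum attack, which is why standardised descendants such as SPHINCS+ are among NIST's selected PQC algorithms. This is a toy for intuition only: a Lamport key must never sign more than one message, and a production vehicle stack would use a standardised, vetted scheme rather than anything hand-rolled.

```python
import hashlib
import os

HASH_BITS = 256  # we sign the SHA-256 digest of the message


def keygen():
    """One-time key pair: a pair of random secrets per digest bit."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(HASH_BITS)]
    # Public key: the hash of each secret value.
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest())
          for a, b in sk]
    return sk, pk


def _digest_bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(HASH_BITS)]


def sign(message: bytes, sk):
    # Reveal one secret per digest bit. Reusing the key leaks
    # secrets, which is why this is strictly a ONE-time scheme.
    return [sk[i][bit] for i, bit in enumerate(_digest_bits(message))]


def verify(message: bytes, sig, pk) -> bool:
    # Hash each revealed secret and compare against the public key.
    return all(hashlib.sha256(sig[i]).digest() == pk[i][bit]
               for i, bit in enumerate(_digest_bits(message)))
```

A vehicle could, for instance, verify a signed firmware update this way: `verify(firmware_bytes, sig, pk)` returns `True` only if the payload matches what was signed, and any tampering changes the digest bits and breaks the check.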

Our commitment to bolstering vehicle security

With autonomous vehicles set to form a large part of the automotive industry's future, the attack surface available to malicious actors will only grow. By moving to Trustonic's Trusted Execution Environment [TEE] approach when protecting the various systems within their connected vehicles, automakers can be assured of greater flexibility across their entire cybersecurity architecture.

Through the use of a TEE, manufacturers can add further layers of protection to the solutions they have traditionally relied on for hardware-backed security, such as hardware security modules [HSM]. It also provides a path to reducing the lifetime cost of keeping security up to date, while simultaneously increasing the range of security applications supported within their vehicles.

We stand ready to support automakers in their journey to growing their autonomous vehicle operations and, in turn, improving both vehicle safety and cybersecurity architecture.

Get in touch

Contact us to find out more

Please leave us a message and our team will get back to you.
