Can AI be hacked? A study of autonomous vehicles

Autonomous vehicles

Self-driving cars, or autonomous vehicles, are one of the biggest applications of AI and are touted to gradually remove human beings from the driver's seat. These cars are designed to do everything a human driver can do by sensing their environment and making intelligent, AI-powered decisions.

How do autonomous vehicles get hacked?

A recent report by the European Union Agency for Cybersecurity (ENISA) examined the cybersecurity risks of autonomous vehicles and what can be done to mitigate them.

  • Compromising the AI supply chain: The hardware and software used in AI systems are at risk of being contaminated by cyber-criminals, similar to the supply chain attacks mentioned earlier. Many AI models are pre-trained and then imported into an organization's AI ecosystem, and attackers can potentially contaminate them to plant a back door for future use.
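One common mitigation for contaminated pre-trained models is to verify each downloaded artifact against a trusted manifest before it is ever loaded. A minimal sketch, with a hypothetical manifest and file name (the digest shown is just a placeholder, in practice it would be published and signed by the model provider):

```python
import hashlib

# Hypothetical trusted manifest: artifact name -> expected SHA-256 digest,
# obtained out-of-band (e.g. signed and published by the model provider).
TRUSTED_DIGESTS = {
    "lane_detector.onnx":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_model(name: str, data: bytes) -> bool:
    """Return True only if the artifact's digest matches the manifest."""
    expected = TRUSTED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown artifact: refuse to load it at all
    # Hash the raw bytes of the downloaded model file and compare.
    return hashlib.sha256(data).hexdigest() == expected
```

A model file that has been tampered with anywhere along the supply chain will produce a different digest and be rejected before it reaches the vehicle's AI stack.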
  • Evading the model via physical means: By slightly modifying the environment, cyber-criminals can potentially "trick" the model, for example by painting over a stop sign or adding graffiti to the road, which can lead the AI-based system to make wrong decisions.
  • Evading the model via adversarial inputs: Adversarial examples are a technique attackers use to evade machine learning models, especially where computer vision is involved. By slightly manipulating the input to an AI system, the attacker can produce a completely different output. Carefully crafted "noise" added to an image, unnoticeable to human beings, can lead the AI to completely reclassify the input as a different object, which can cause serious problems. One well-known demonstration showed an image recognition system classifying a school bus as guacamole!
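The core idea of adversarial inputs can be shown numerically on a toy linear classifier (all weights and inputs below are made up for illustration; real attacks such as FGSM use the gradient of a deep network's loss, which for a linear model is simply its weight vector):

```python
import numpy as np

# Toy linear classifier: score = w . x, predict "stop sign" if score > 0.
w = np.array([1.0, -2.0, 3.0])   # hypothetical learned weights
x = np.array([0.1, 0.0, 0.1])    # clean input, score = 0.4 -> "stop sign"

def classify(v):
    return "stop sign" if w @ v > 0 else "other"

# FGSM-style perturbation: nudge every feature by at most eps in the
# direction that lowers the score. For a linear model the gradient of
# the score with respect to x is exactly w, so we step against sign(w).
eps = 0.2
x_adv = x - eps * np.sign(w)     # each feature changes by only 0.2

print(classify(x))      # -> stop sign
print(classify(x_adv))  # -> other
```

Even though no single feature moved by more than 0.2, the prediction flips, which is why small, human-imperceptible pixel changes can reclassify an entire image.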

Security recommendations

Many of the recommendations in the report are standard for any company, such as conducting risk assessments and making security by design a necessary part of the AI present in these vehicles. Some of the key recommendations unique to AI are below:

  • Periodic evaluations of AI models and of the data fed into them, to ensure neither has been tampered with or altered
  • Thorough vetting of the supply chain, including third-party providers, to ensure there is no weak link in the chain
  • Increasing AI cyber-security knowledge among developers and professionals, as the current lack of it is a major obstacle and a cause of risks being introduced
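The first recommendation, periodic evaluation of deployed models, can be operationalised as a recurring gate in the ML pipeline. A minimal sketch, with illustrative names and thresholds, that raises an alarm if accuracy on a fixed, trusted evaluation set drifts below the baseline recorded at deployment time:

```python
BASELINE_ACCURACY = 0.95   # hypothetical accuracy recorded at deployment
MAX_DROP = 0.02            # tolerated regression before raising an alarm

def evaluate(model, eval_set):
    """Fraction of trusted evaluation samples the model classifies correctly."""
    correct = sum(1 for sample, label in eval_set if model(sample) == label)
    return correct / len(eval_set)

def periodic_check(model, eval_set):
    """Run on a schedule; a sudden drop may indicate tampering or drift."""
    acc = evaluate(model, eval_set)
    if acc < BASELINE_ACCURACY - MAX_DROP:
        raise RuntimeError(
            f"Model accuracy fell to {acc:.2%}; possible tampering or drift"
        )
    return acc
```

An unexplained accuracy drop on data the model previously handled well is exactly the signal a poisoned model update or corrupted input pipeline would produce.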



Taimur Ijlal

☁️ Cloud Security Pro | 👨‍💻️ A.I. Noob | ✍️ Writer | 🇬🇧 UK Global Talent VISA holder.