Security Risks of Continual Learning in AI systems

Continual learning is a threat vector that attackers can take advantage of.

Taimur Ijlal


Photo by Markus Spiske on Unsplash

As human beings, we are constantly bombarded with data from all sides.

Our senses work overtime to interpret what we see, hear, and feel, and to help us make sense of it.

Machine Learning copies this human ability through continual learning (CL), in which a model is deployed and continually learns from data in a production setting.

Just like the human brain, the model attempts to understand this continuous stream of data, rapidly adapting and re-learning in continual learning mode.

Given how much data is generated every minute, continual learning is necessary to keep models re-trained and updated.
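To make the idea concrete, here is a minimal sketch of continual (online) learning, not taken from the article: a toy linear model that keeps updating itself one sample at a time as "production" data streams in. The `OnlineLinearModel` class and the simulated stream are illustrative assumptions, not a real production setup.

```python
import random

class OnlineLinearModel:
    """A toy linear regressor updated one sample at a time (online SGD)."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        # One gradient step on the squared error for this single sample.
        # This is the "continual" part: the model keeps learning after deployment.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Simulate a production stream whose true relationship is y = 2*x + 1.
random.seed(0)
model = OnlineLinearModel(n_features=1)
for _ in range(2000):
    x = [random.uniform(-1, 1)]
    model.update(x, 2 * x[0] + 1)

print(round(model.w[0], 2), round(model.b, 2))  # converges toward 2.0 and 1.0
```

The key design point is that there is no separate "training phase": every incoming sample immediately changes the model's weights, which is exactly what makes the data stream itself security-relevant.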

The risks of Continual Learning

Machine Learning models are usually tested quite thoroughly during development and at deployment time.

The data on which they are trained is checked to be of the highest quality so that their decision-making ability is optimal.

However, what happens when the model goes into production and starts the continual learning process?

What if attackers were able to…
