AI regulations are coming. What a business using AI needs to know
Until now, any company that processes or stores (sensitive) personal data has had to follow the GDPR. Since data is what makes AI models work, this has so far been the main regulation many AI-driven companies follow. In August 2024 a new EU regulation entered into force: the AI Act. ISO/IEC 42001, a management-system standard for AI, was also published recently.
What is going to change?
The AI Act defines four tiers of 'risk' for AI systems, based on the potential harm to people's lives and rights:
- Tier 1 (Unacceptable risk): AI systems considered a clear threat to people's safety or rights, such as social scoring or manipulative AI. These are banned outright.
- Tier 2 (High risk): A model that scores students' exams, predicts diseases, or scores personal credit risk. These are classified as 'high risk' because they can significantly affect a person's life: if a model wrongly gives a student low grades, it can have lasting career consequences.
- Tier 3 (Limited risk): A model that handles customer-service conversations or plays music on voice command. These are 'limited risk' because they are unlikely to seriously affect a person's life: if a chatbot refuses to refund a damaged glass, you can live with it.
- Tier 4 (Minimal risk): A model that adjusts light brightness or scores/generates paintings. These pose no meaningful risk to people.
Model Governance
All companies providing or using AI models in Tiers 2 and 3 will need model governance in place. What does that mean in practice? The details are still being worked out (the harmonised standards and detailed guidance are not yet finalised), but companies will likely need quality and safety checks and procedures similar to those now in place for cyber security.
Example
As in cyber security, you would periodically 'pen test' the model to verify it still behaves correctly, and apply current security and quality standards (e.g., to the data pipeline) to minimize the risk of unwanted results.
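A periodic model check like this could be as simple as re-scoring the model on a labelled holdout set and flagging it when quality drops below an agreed bar. The sketch below is a minimal illustration, not a compliance procedure: the names (`check_model`, `ACCURACY_FLOOR`), the toy model, and the threshold are all assumptions for the example.

```python
# Minimal sketch of a periodic model quality check: re-score a model on a
# held-out labelled set and flag it when accuracy falls below a threshold.
# The threshold and all names here are illustrative assumptions.

ACCURACY_FLOOR = 0.90  # assumed internal quality bar


def toy_model(x):
    """Stand-in for a deployed model: predicts 1 when x is positive."""
    return 1 if x > 0 else 0


def check_model(model, labelled_samples, floor=ACCURACY_FLOOR):
    """Return (accuracy, passed) for a batch of (input, expected) pairs."""
    correct = sum(1 for x, expected in labelled_samples if model(x) == expected)
    accuracy = correct / len(labelled_samples)
    return accuracy, accuracy >= floor


# A tiny labelled holdout set: (input, expected label) pairs.
holdout = [(-2, 0), (-1, 0), (0.5, 1), (1, 1), (3, 1)]

accuracy, passed = check_model(toy_model, holdout)
print(f"accuracy={accuracy:.2f} passed={passed}")  # accuracy=1.00 passed=True
```

In practice this check would run on a schedule (as pen tests do), log its results for audit, and alert the team responsible for the model when it fails.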