• Training the model should (or can) be done on anonymized data anyway, so there is no problem with this part.
    Asking an ML or AI system for results can usually be done with anonymized data too. The "knowledge" of the model is certainly shaped by its data inputs, but it is not tied to a specific (named) person that I could query.

    Effectively this regulation is pushing in the right direction, but it can also be a "heavy load" at times. As soon as I add data anonymization to the ML processing pipeline, I should be "GDPR-safe" — see the sketch below for what that step could look like.
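
    To make that concrete, here is a minimal sketch of stripping direct identifiers before the data ever reaches the training step, assuming a pandas DataFrame with hypothetical columns such as `name`, `email`, and `age`. Dropping identifiers outright is the safer default, since salted hashing usually only counts as pseudonymization under GDPR, not full anonymization.

    ```python
    # Minimal sketch: remove direct identifiers before training.
    # Column names (name, email, user_id, age, purchases) are hypothetical.
    import pandas as pd

    DIRECT_IDENTIFIERS = ["name", "email", "user_id"]

    def anonymize(df: pd.DataFrame) -> pd.DataFrame:
        """Return a copy of df with direct identifiers dropped.

        Note: hashing an id with a salt would only pseudonymize it (the
        mapping can be recovered with the salt), so dropping is safer.
        """
        return df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])

    if __name__ == "__main__":
        raw = pd.DataFrame({
            "name": ["Alice", "Bob"],
            "email": ["alice@example.com", "bob@example.com"],
            "age": [34, 29],
            "purchases": [12, 3],
        })
        train_df = anonymize(raw)          # only non-identifying features remain
        print(train_df.columns.tolist())   # ['age', 'purchases']
    ```

    This only covers direct identifiers; whether the remaining attributes can re-identify someone in combination is a separate question, which is part of why the "heavy load" remark above still applies.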