With the GDPR now being enforced in the European Union, plenty of companies are getting concerned about potential fines from regulatory authorities if they aren't complying with the law, or at least making an attempt to comply. There is certainly leeway for regulators to adjust fines or issue warnings when a company is making a good-faith effort, and that has likely contributed to the compliance work underway inside many organizations.
There are likely some companies that won't worry, since there are relatively few regulatory employees and many companies. Lots of complaints are coming in, which could easily overwhelm the relatively small staff in each EU country. Complaints might not be investigated in a timely manner, or might even be lost in the workload. The problem will likely get worse as more consumers complain about data processing practices. I don't expect regulatory authority staffing to increase, so I suspect only the most egregious or most-complained-about companies will get caught.
There is one way to amplify the capabilities of the relatively small staffs reviewing complaints. Researchers at the EU Institute in Florence are working with consumer organizations to create AI programs that can perform some of the work. The initial thrust is to evaluate companies' privacy policies. If there are issues, the software doesn't assess a fine, but it does alert a human to perform additional checks.
In one sense, this is exactly what computers can do well. They amplify the capabilities of humans by doing a piece of the work. We can build systems, whether traditional programmed ones or AI based applications, that handle a piece of the work that requires lots of human labor. Once initial evaluations are made, a human can review the work and make more refined judgments.
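The triage pattern described above can be sketched in a few lines. This is a hypothetical, minimal example, not the actual system the researchers built: it assumes a simple keyword heuristic, and the topic names and required phrases are illustrative. The point is the division of labor, where software produces a rough first pass and a human makes the real judgment.

```python
# Hypothetical first-pass screener for privacy policies. The topics and
# phrases below are illustrative assumptions, not a real compliance checklist.
REQUIRED_TOPICS = {
    "lawful basis": ["lawful basis", "legitimate interest", "consent"],
    "data subject rights": ["right to erasure", "right to access", "right to rectification"],
    "controller identity": ["data controller", "contact"],
}

def screen_policy(text: str) -> list[str]:
    """Return topics that appear to be missing, for a human to double-check."""
    lowered = text.lower()
    flags = []
    for topic, phrases in REQUIRED_TOPICS.items():
        # Flag the topic only if none of its indicator phrases appear.
        if not any(phrase in lowered for phrase in phrases):
            flags.append(topic)
    return flags

policy = "We process data with your consent. You have the right to access your data."
print(screen_policy(policy))  # -> ['controller identity']
```

The screener never renders a verdict; it only narrows the pile, so a reviewer spends time on the policies most likely to have gaps rather than reading every one.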
The danger, to me, is that humans will be lazy. They'll start to trust the AI systems as authorities and use less of their own judgment, mostly because it's just easier. I could see these systems evolving over time to inadvertently train the humans involved: new employees would simply trust the AI's results from the start, learning from the AI rather than teaching it and constantly evaluating its effectiveness.
I think AI can really help improve the way we accomplish work in many areas, but it should be regularly audited and approached with some skepticism. There certainly needs to be a supervisory group overseeing the program that has no stake in the outcome. We should make sure that the goals and results of any AI system stay focused on what we want to achieve, and that we transparently define those goals for anyone impacted. Otherwise we might end up with AIs that evolve in ways counter to their original purpose.