
Navigating AI regulation on both sides of the Atlantic
The EU and US take vastly different approaches to AI governance. In our Data Insiders podcast, Petja Piilola (Blic Public Affairs) and Benjamin Wallace (Tietoevry Create) discuss the paths forward.
Listen to the podcast on Spotify
Developing Artificial Intelligence is the space race of our times. The technology leaders of the future will likely be defined by how well they have managed to develop and adapt to our new, AI-powered societies.
Recently, the United States has been easing regulations to enhance its innovation capabilities. Europe’s approach, on the other hand, is defined by seeking to govern AI’s societal and security risks – the results of this work can be seen in the EU AI Act, the world’s first comprehensive AI legislation.
For companies, navigating the legislative hurdles poses significant challenges, which can slow down or even halt development. What is the best way to keep the train moving?
In this episode of the Data Insiders podcast, our host Oona Ylänkö discusses the topic with two experts who navigate these complexities daily:
A partner at public affairs consultancy Blic Public Affairs, Petja Piilola works at the forefront of EU AI legislation in Brussels.
Benjamin Wallace – Manager, Architecture and Security Americas at Tietoevry Create – specializes in security practices for highly regulated industries, helping companies manage AI-related compliance risks.
Together, they tackle themes such as the charged geopolitics of AI dominance, the challenges of regulating a rapidly evolving technology, and what businesses can do to navigate a landscape that changes faster than any law can keep up with.
Petja Piilola, Blic Public Affairs, works at the forefront of EU technology legislation in Brussels.
All AI legislation has global consequences
The EU’s new AI Act is making waves across the globe. From an ethics and safety perspective, such a framework has been long overdue, but it is not without issues. Piilola sees it as long, threatening and in parts vague.
“Everywhere I go I hear that companies feel regulation is the number one impediment to innovation, especially with AI. The AI Act itself is not the most restrictive law we have in the EU. The real problems are the GDPR and the Medical Device Regulation (MDR), which are very restrictive for AI use cases”, Piilola reveals.
The trepidation caused by the EU legislation is not limited to European companies. Even though the US has been moving towards deregulating AI, the effects of the EU legislation can be felt there too – despite the Trump administration recently signing an executive order to further eliminate barriers to AI development.
“Most of the larger American organizations are interested in working on a global level. Even if there is deregulation on the American side of things, they're still going to have to consider the EU AI Act as their path to the global product base”, Wallace suggests.
A race against time
The EU’s approach to AI regulation was initially driven not only by concerns over its risks but also by the belief that a cautious strategy might be economically safer, given the uncertainty surrounding AI’s long-term impact.
“The initial idea was to become the leaders in regulation so that we would also become leaders in AI. The most responsible AI would come from Europe, so everyone would want our products and eventually, also adapt to our legislation. Unfortunately, that’s not what happened”, Piilola explains.
The issue is well recognized within the European Union too: many leaders are now advocating for more lenience in regulation. Additionally, the EU is attempting to take a more active role in encouraging innovation through funding programmes, initiatives and collaborations.
While Piilola thinks this is a step in the right direction, he remains sceptical of the practicalities. The structure of the EU is that of a legislative institution – changing the way it works is a tall order. At the very least, it’s not something that can change quickly.
This also highlights a broader issue in AI regulation: the time it takes to pass laws and make major administrative changes anywhere in the world. The AI Act alone took multiple years to create, and during that time we’ve already seen massive game-changers such as ChatGPT.
Wallace draws a parallel to the US with HIPAA, a federal law designed to protect health information. Written loosely to accommodate the impossible task of keeping up with evolving technology, it later had to be complemented by second- and third-party compliance standards such as FedRAMP (a security assessment standard for cloud service providers) and HITRUST (a risk management framework covering multiple compliance standards).
“If there’s a new headline every other day, how could we legislate for that?” Wallace asks.
Benjamin Wallace, Tietoevry Create, helps organizations manage AI-related risks.
What can companies do?
Taking all this into account, it is no wonder that the constantly evolving web of compliance laws causes serious concern for organizations looking to enter the AI market. The worst-case scenarios range from hefty fines to wasting years’ worth of development budget.
How should companies approach this? The answer depends on the nature of the project, but Piilola thinks that especially companies with lower-risk products should move forward more boldly than they currently are.
“Applications that can endanger people’s lives are of course a serious matter. But with low-hanging use cases where the risks are mostly regulatory, we need to be more courageous.”
Both guests agree that taking the plunge requires not only commitment from leadership, but most likely a self-made governance model.
An example of what this can look like is Tietoevry Create's ESSR (Ethics, Safety, Security and Regulatory Compliance) Framework – a project Wallace has been heavily involved with. Developed together with a community of AI and risk experts, it helps evaluate individual AI use cases through a collaboration-focused approach that trains teams to understand and operate safely within regulatory compliance and risks.
“You can self-regulate instead of waiting for the official word to start doing things. And for smaller companies, even an off-the-shelf AI governance model can take you a long way”, Piilola says.
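What a self-made governance model looks like in practice will vary, but at its simplest it can start as a structured register of AI use cases, each mapped to a risk level and the rules that apply to it. The sketch below is purely illustrative and assumes a Python-based tooling setup: the risk tiers loosely mirror the EU AI Act’s categories, while the field names and review rule are hypothetical and not taken from the ESSR Framework or any official guidance.

```python
# Illustrative sketch of a minimal, self-made AI governance register.
# Risk tiers loosely mirror the EU AI Act's categories; field names and
# the review rule are hypothetical, not the ESSR Framework or legal advice.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. safety-critical or medical uses
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # most low-risk internal tooling


@dataclass
class AIUseCaseRecord:
    name: str
    owner: str
    risk_tier: RiskTier
    applicable_rules: list[str] = field(default_factory=list)  # e.g. ["GDPR"]
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def needs_expert_review(self) -> bool:
        """Flag use cases that should go to legal and security experts."""
        return self.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)


# Example entry: a low-risk internal assistant, documented for due diligence.
record = AIUseCaseRecord(
    name="Internal meeting summarizer",
    owner="Data platform team",
    risk_tier=RiskTier.MINIMAL,
    applicable_rules=["GDPR"],
    mitigations=["No customer data used", "Human review of outputs"],
)
print(record.needs_expert_review())  # False
```

Keeping even a lightweight record like this also supports the documentation point Wallace raises below: it leaves a trail showing which risks were considered and what mitigations were put in place.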
Finally, navigating AI legislation is best done with a specialized expert. Since most organizations don’t have such resources in-house, seeking external guidance is highly recommended.
“Find someone who has experience in existing regulations, security and ethical use of the technology and let them help guide you in how to use it in your organisation. The other half is that you must document all you do, making sure you can prove to have done as much due diligence as possible,” Wallace advises.
Interested in learning more about this topic? Listen to the full conversation on our Data Insiders podcast below!

Data changes the world – but does your company take full advantage of it? Data Insiders is a podcast where we seek answers to one question: how can data help us all do better business? The podcast addresses the trends and phenomena around this hot topic in an understandable and interesting way. Together with our guests, we share knowledge, offer collegial support and reveal the truth behind hype and buzzwords.

Benjamin is passionate about enhancing organizational resilience and fostering ethical governance through robust security practices. With over a decade of experience, he has designed and implemented multiple security frameworks aligned with industry standards like HITRUST, HIPAA, and FedRAMP.
At Tietoevry Create, Benjamin leads transformative security initiatives, guiding teams to elevate the maturity of security operations and helping clients confidently navigate complex compliance challenges.
Read more insights from Data Insiders
-
Data / Benjamin Wallace / 10.1.2025
What goes on in the black box: A path to responsible AI
Explore Tietoevry Create’s Responsible AI Framework for developing safe, ethical, and compliant AI applications.
-
Data / Data Insiders / 17.12.2024
Can designers help us build more responsible AI?
In our Data Insiders podcast, YLE’s Minna Mustakallio and Tietoevry Create’s Denny Royal suggest we need more human-centric approaches to tackle AI’s most pressing ethical issues.
-
Data / Data Insiders / 4.12.2024
Towards software-defined vehicles: rethinking the automotive industry
The automotive industry is reaching a turning point. In our Data Insiders podcast, we discuss its potential futures with Elektrobit’s Moritz Neukirchner and Tietoevry Create’s Piotr Romanowski.