
As artificial intelligence becomes more prevalent across industrial value
chains, a new question is coming to the forefront: can organizations trust AI at scale?
As AI moves from experimentation into operational use, transparency, traceability, and
accountability are becoming essential requirements—especially in regulated
industries. Looking ahead to 2026, industrial AI is entering a more demanding phase
of maturity. In this context, Leon Lauritsen, CEO of Aras, explains why trust in AI
depends on strong data, governance, and context—and why PLM is emerging as a
critical foundation for scalable, decision-ready industrial AI.
Artificial intelligence is evolving from an experimental technology to an operational necessity.
Algorithms are becoming more capable, use cases are expanding, and AI is increasingly
embedded in industrial operations – from process optimization and predictive maintenance to
assisted engineering and decision support.
As AI becomes operational, organizations must be able to trust, explain, and stand behind the
decisions it helps inform. Demonstrating reliability, traceability, and accountability is becoming
just as important as accuracy, particularly in industrial environments where safety,
compliance, and responsibility matter.
From performance to proof: a paradigm shift for AI
For years, the primary question around enterprise AI was whether the result was correct.
Today, an additional question is becoming unavoidable: why is this result correct?
Many AI systems deliver strong outputs, but how those results are produced is not always
clear. In critical industrial sectors – manufacturing, mobility, medical technology or defense –
this lack of transparency limits how far AI can be trusted. As AI plays a greater role in
operational and engineering decisions, expectations around security, regulatory compliance,
and legal responsibility increase.
In 2026, explainability, verifiability, and auditability will be central to scaling AI responsibly in
industrial environments.
European regulation: traceability becomes a condition for market access
This transformation is also being accelerated by changes in the regulatory landscape. The
EU AI Act and initiatives such as the EU Digital Product Passport are raising expectations
around transparency, traceability, and accountability in the use of AI.
Organizations need to understand how AI-driven insights are produced, what data they are
based on, and who is accountable for the decisions they support. In this environment, AI
systems that cannot be traced or explained will struggle to deliver sustained business value –
and compliance is becoming a structural requirement that shapes how industrial digital
architectures are designed and operated.
PLM as the foundation of trusted AI
As these requirements converge, the question is no longer simply which AI tools to deploy,
but on what foundation they should run. Trust is not created by algorithms alone – it depends
on the quality, context, and governance of the data and processes that feed them. This is
where product lifecycle management (PLM) takes on a new role.
Beyond managing product development, PLM provides a connected, authoritative foundation
for product data, decisions, validations, tests, and changes across the lifecycle. That
continuity provides the context needed to explain and audit AI-supported decisions – an
essential capability in regulated, high-stakes environments.
AI only creates value when it is grounded in clear intent and a strong data foundation. Too
often, organizations lead with technology instead of clearly defining the decisions AI is meant
to support. When that happens, AI accelerates complexity rather than outcomes.
When data is siloed or governed inconsistently, AI amplifies unreliable insights, conflicting
decisions, and hidden risks. Governance by design ensures that AI supports human decision
making, remains explainable and traceable, and only learns from appropriately classified data.
Adaptive intelligence in support of human decision-making
As AI becomes operational, these demands place new expectations on PLM itself. PLM has
primarily served as a system of record. In an AI-driven environment, it needs to become a
system of guidance.
This is where adaptive intelligence comes into play. Rather than simply storing information,
PLM platforms must help organizations identify signal in the noise, illustrate potential impacts
before issues cascade, and align teams in real time. By continuously analyzing relationships
across data, processes, and decisions, adaptive intelligence supports earlier insight, clearer
trade-offs, and faster coordination as work unfolds.
This does not replace human decision-making – it strengthens it. Decisions remain in the
hands of experts, but they are informed by contextualized, reliable, and explainable
intelligence. The result is shorter decision cycles, lower coordination friction, and PLM
solutions that can evolve at the pace of the business.
An enabler of innovation
In 2026, the direction is clear. AI is moving into daily operations, and organizations must
integrate it thoughtfully into core business processes. Operational AI requires platforms that
can interpret, anticipate, and respond as work happens.
Organizations that invest in governed, traceable data architectures and decision-ready
platforms will be better positioned to operationalize AI with confidence. Transparency will not
hold innovation back—it will enable it.


