On 13 May, an audience of several hundred came to Vippa for nine 30-minute panels on AI and robotics in practice. The framing question was simple: how do you create value with AI and robotics in 2026, once you've stripped out the demos?
The Antire AI Survey we built the day around shows where we sit. AI is rated 6.5 out of 10 strategically across the 35 Scandinavian organizations we interviewed. 56% use it only in support functions. 24% can show a measurable effect above 5%. None can show above 20%. 56% name culture as the biggest barrier ahead of leadership, talent, funding and regulation.
As moderator, the through-line for me was this: ambition is no longer the bottleneck in Norway. Cultural capacity to absorb the technology is. Most panels arrived at that point from different angles.
Christopher Frenning (Microsoft) and Wilfred Østgulen (National Library) opened the day. The question was why some AI pilots scale and others quietly stall.
Pilot and production are different problems. In pilot mode you ask whether the model can do something impressive. In production the questions are harder. Can we trust it? Can we govern it? Can we afford it at scale? Does it work in Norwegian? Does it work on our data? Does it improve a workflow that actually matters?
The point that stuck with me, and that several attendees pulled out of the day afterwards, was on language. Large global models are powerful, but they aren't automatically good enough for Norwegian, and considerably less so for smaller language communities like Sámi. That's a production constraint, not an academic one. Wilfred laid out what the National Library is doing about it: building their own open-source models adapted to Norwegian, because Norwegian organizations can't blindly trust models trained primarily on English. His closing framing captured the room: you can't scale what you don't trust, and you can't trust something that doesn't understand you.
Christopher's other contribution was framing AI as a democratizing force. The technology increases the power of each individual. More people can code, design, analyze and solve harder problems than they could two years ago. That only becomes value when organizations know how to absorb it. The pattern the panel kept returning to: AI works when top-down leadership meets bottom-up energy. Either alone doesn't get you there.
Anne Ruth Gjerstad and Jan Robert Heiberg from Posten Bring, with Audun Hoff from Oracle, took on whether data readiness is still the genuine blocker or has become a convenient excuse. The answer wasn't binary.
The survey gives Scandinavian organizations an average data foundation score of 5.3 out of 10. Leaders we interviewed pointed repeatedly to data quality, ownership and accessibility as binding constraints. The panel agreed those constraints exist.
The harder point from the discussion was that perfect data isn't the goal. The teams making progress build momentum by making data usable in specific workflows, with domain experts close to the problem. Data foundations matter, but treating them as a prerequisite gate becomes an excuse for never shipping.
The line worth carrying out of the session: AI makes the absence of data discipline visible faster than anything else.
This session focused on Antire's AI Survey, and to comment on the findings we were joined by Anne Lisæth Schøyen (Deputy Director IT Development at the Norwegian Offshore Directorate) and Geir Inge Stokke (CEO, Norwegian Road Federation). The framing question was whether Norwegian organizations are as far along on AI as they think they are.
The honest answer is no. The survey shows strategic ambition with limited operational depth. 56% report AI in support functions only. 16% are still mostly in PowerPoint and pilot mode. None describe AI as broadly embedded in core operations.
The most important finding wasn't technical. 56% of respondents named culture as the biggest barrier to further AI adoption, ahead of leadership, talent, funding and regulation. That matched what came up through the rest of the day. AI is a leadership and culture problem with a technology component. Treating it as an IT project is one reason so many organizations are stuck.
After lunch we moved from AI models to robotics. The question was how to get from impressive robotic capabilities to operational workflows that teams can trust, integrate and scale.
A robot that performs in a demo isn't the same as a robot that creates measurable value in a warehouse, on a construction site, or in an industrial environment. Mathias Nedrebø (Six Robotics), Svein Kvernstuen (Remotion) and Jonas Neraal Jakobsen (AutoStore) came at this from different positions, and the friction that did surface was the genuine kind. Robots must interact with the physical world, and the physical world is messy. The hard part is workflow redesign. Doing the task once is usually the easy part.
Optimistic AI math is the dominant genre right now. Time saved becomes money saved. Faster drafts become productivity. Copilot seats become transformation. The panel asked what proving business value actually looks like once security, governance and production-grade complexity get added back in. To explore this, we were joined by Kjell Erik Hofland, SVP IT at Höegh Evi, and Gaute Lien, CEO at Sicra AS.
The survey shows how hard honest measurement is. 24% of respondents report measurable AI effect above 5%. None report above 20%. Of the 23 companies that gave numbers, 19 are still below 5%.
That doesn't mean AI isn't creating value. It means many organizations are still weak at measuring it honestly. Better questions to ask: did the workflow improve, did cost go down, did quality go up, did risk decrease, did customers or citizens experience something better? Counting how many people used AI doesn't answer any of those.
Silvija Seres and Ragnar Harper (AWS Norway) focused the conversation on culture, and Silvija reframed the panel almost as soon as it started. Her starting move: ask what AI can actually do for us, rather than how AI affects us.
The provocation that stayed with the room came a few minutes later. Put bluntly, Norway could see 500,000 more people on NAV within two years if we don't succeed in building AI competence across the population. Singapore and Finland are already further ahead. That number deserves attention and deserves scrutiny, and the room reacted to both. She stands behind it. Some people in the audience visibly didn't. That's the kind of disagreement the day needed and didn't always get.
Silvija's supporting framework: AI is roughly 10% technology, 20% algorithms and 70% people. The 70% is where the 500,000-on-NAV claim actually lives. The technology gets cheaper and more available every quarter. The algorithms improve almost as fast. The people side either learns to absorb that pace, or it doesn't.
Her case for what's possible if we do: 10x, 100x, maybe 1000x more productive workers across the economy. The flip side: an HR bomb for the people who aren't brought along. The dividing line is competence, not technology access.
The discussion with Ragnar that followed was about where human judgment still earns its keep. Humans own the result. The algorithm is the tool. That sounds simple until you try to apply it inside a workflow that has run on human judgment for thirty years. Sometimes people are needed for accountability, escalation, ethics, empathy and domain judgment. Other times "human in the loop" becomes an expensive way of preserving old processes. The challenge is redesigning work so human judgment lands where it actually changes the outcome.
Silvija closed on a reframe I think the best leaders in the room will carry into their next budget cycle: AI as a competence investment in people, with a technology component.
Robotics in Norway is both overhyped and underbuilt at the same time, and on this panel that came out clearly. We're probably overhyping humanoid robots in the short term, especially for complex household and open-world tasks. We're probably underbuilding practical robotics in industrial, logistics, infrastructure and operational environments where Norway has needs and competitive advantages.
The next five years will bring rapid progress. The winners won't be the teams with the most impressive demos. They'll be the ones combining functional hardware, reliable software, operational understanding and a clear business case, a point made concrete in this session by Stein H Danielsen, Co-Founder & Chief Solutions Officer at Cognite, Mia Norman, VP of Engineering at Wheel.me, and Andreas Mollatt, Co-Founder at Physical Robotics AS.
Lídice Nahomi González from ANIA, El Salvador's national AI agency, spoke from outside the Norwegian frame the rest of the day operated in. El Salvador has ambitious goals and is willing to take a big bet on moving fast, in a way we might not be used to in Europe.
For public services, education, health and citizen-facing systems, people need to understand why AI is being used, how decisions get made, who carries responsibility, and what happens when something goes wrong. Explainability is hard. Trust can't be treated as an afterthought.
The El Salvador frame also reminded the room that AI can be a national transformation tool, not only an enterprise productivity tool. Used well, it can lift education, access to services and public-sector capacity. Used poorly, it deepens inequality and distrust.
The ARCH Fellowship 2026 cohort presented three projects, with Gard Thomassen (IT Director, University of Oslo) closing the day on the broader question.
Norway's skills gap is the urgent question. Several threads came together in the discussion: AI literacy, national capability, local models, open ecosystems, leadership responsibility, lifelong learning.
If AI increases the power of each person, then the countries that win are the ones that help their people use that power well. That requires a serious lifelong learning agenda for leaders, case workers, clinicians, analysts, operators, teachers and everyone whose work will change. Engineers and students are part of it, not the bulk of it.
The risk is a large segment of the workforce left outside the next productivity wave. The same dynamic plays out at company scale: organizations that move slowly face a smaller version of the same problem.
A few things from the day worth holding onto for next year's conversation.
Silvija's 500,000-on-NAV provocation and her 10/20/70 framing are now in the room. Both can be argued with. Neither can be ignored. If she's roughly right, the public-sector capacity question has a much shorter clock than most of our institutions are running on, and El Salvador, Singapore and Finland are good places to look for what serious national work might look like.
Production is the test. Pilots prove that something is possible. Production proves whether it's useful, governable, and worth running.
Norwegian-language capability isn't optional for organizations that work in Norwegian. Wilfred's framing applies broadly: you can't scale what you don't trust. The National Library doing this work openly is a model others should be borrowing from.
Data foundations matter, but perfect data isn't the goal. The goal is usable, trusted data inside workflows that create value.
Culture is the bottleneck. The survey shows it. The day confirmed it. The limiting factor is rarely the model. It's whether people, leaders and organizations are ready to work differently.
Norway has advantages, but not unlimited time. Trust, strong institutions, industrial complexity, technical talent: none of those advantages waits forever to be used.
For 2027, the conversation I want is about which organizations changed how they work, what it cost them, and what they wish they had known. We've been comparing future plans for three years. Past actions are the missing data.