As quantum computing advances, the conversation around post-quantum risk is moving from theoretical to practical. The shift is driven in part by the reality that machine learning pipelines were never designed to keep data private. As adversaries grow more capable and quantum-era threats approach, enterprises can no longer afford to put off rethinking how they secure data.
In the following interview, Anita Oehley and Jeremy Samuelson from Integrated Quantum Technologies discuss what it means to be post-quantum ready.
What does it mean to be post-quantum ready, and why should we care about this now?
Samuelson: Being post-quantum ready is not a research exercise or a future IT upgrade. It’s a question of risk ownership and long-term resilience. It means knowing the inner workings of your operations today, including which data and AI assets need to remain secure over the long term, and whether your infrastructure can adapt as regulatory standards and threats evolve. As AI becomes embedded across the enterprise, visibility into how data are aggregated and reused downstream will also be required.
The reason this matters now is that the exposure window is already open. Data collected today can be stored indefinitely and exploited later. Most enterprise architectures are not designed to govern downstream AI use of data or to protect long-lived information across complex, distributed environments. Waiting for quantum computing to fully arrive shifts risk forward — it doesn’t eliminate it. And let’s not forget that AI systems continue to increase the strategic and financial value of large datasets in the present.
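The exposure-window argument above is often formalized as Mosca's inequality: if the time data must stay confidential (x) plus the time a cryptographic migration takes (y) exceeds the time until a cryptographically relevant quantum computer arrives (z), then data encrypted today is already at risk. A minimal sketch of that arithmetic (the year figures below are illustrative placeholders, not predictions):

```python
def exposure_gap(shelf_life_years: float,
                 migration_years: float,
                 years_to_quantum: float) -> float:
    """Mosca's inequality: risk exists when x + y > z.

    Returns the number of years by which protection falls short
    (positive = exposed window, zero or negative = covered).
    """
    return (shelf_life_years + migration_years) - years_to_quantum

# Illustrative figures only; each organization must supply its own estimates.
gap = exposure_gap(shelf_life_years=10,  # x: how long the data must stay secret
                   migration_years=5,    # y: how long a PQC migration takes
                   years_to_quantum=12)  # z: estimated arrival of a capable machine
print(f"Exposure gap: {gap} years")      # positive -> "harvest now, decrypt later" risk
```

The point of the model is the one Samuelson makes: because x and y are fixed costs already in motion, waiting for z to shrink only widens the gap.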
What are the biggest challenges we face today?
Oehley: AI literacy.
AI has been part of business operations for decades, but we’ve now entered the era of modern AI, where its impact is immediate and transformative. The real question for organizations is how to implement the right guardrails so they can harness AI’s full potential without compromising security. Many enterprises are still working to strike the right balance between rapid innovation and responsible deployment, which often slows progress. That caution is understandable: leaders must protect their data while also ensuring that significant AI investments generate real value. AI literacy is the key to resolving this tension, empowering teams to innovate with confidence while managing risk effectively.
How is Integrated Quantum Technologies helping solve these challenges?
Samuelson: Integrated Quantum Technologies is addressing these challenges by building security into the foundation of modern AI infrastructure, rather than layering it on as an afterthought. IQT’s approach centers on protecting data, models, and workflows throughout their entire lifecycle.
IQT’s solutions focus on three key priorities:
- Secure and Resilient AI Workflows: IQT’s technology is designed to protect sensitive data not just at rest, but as it moves through development, training, and production environments — where most AI risk actually emerges. By ensuring that data governance and control extend into downstream AI use, organizations can innovate with confidence without exposing valuable assets to unintended reuse or attack vectors.
- Quantum-Resilient Architecture: With quantum computing on the horizon, traditional cryptographic protections may no longer be sufficient. IQT is future-proofing enterprise infrastructure by embedding quantum-aware design principles and enabling agility so that enterprises can maintain trust in their systems as the computing landscape shifts.
- AI Literacy and Operational Confidence: Beyond technology, we’re working to help leaders translate abstract risk into actionable strategy. Our approach supports greater AI literacy — helping organizations understand not just how to deploy AI, but how to protect it and govern it responsibly in a way that doesn’t compromise business outcomes.
In doing so, we help enterprises pursue innovation without sacrificing security, enabling them to capture the full value of AI investments today while managing the long-term risks associated with data use and emerging computing paradigms.
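The "agility" in the quantum-resilient bullet above usually refers to cryptographic agility: callers reference an algorithm by policy name, so replacing a classical primitive with a post-quantum one is a configuration change rather than a code rewrite. A minimal sketch of the pattern, not IQT's implementation; the registry is hypothetical, and HMAC with two hash functions merely stands in for real classical and post-quantum signature schemes:

```python
import hashlib
import hmac
from typing import Callable, Dict, Tuple

# Registry of signing backends keyed by policy name. A real deployment
# would register classical (e.g. ECDSA) and post-quantum (e.g. ML-DSA)
# schemes behind this same interface.
_BACKENDS: Dict[str, Tuple[Callable, Callable]] = {}

def register(name: str, sign_fn: Callable, verify_fn: Callable) -> None:
    _BACKENDS[name] = (sign_fn, verify_fn)

def sign(policy: str, key: bytes, msg: bytes) -> bytes:
    return _BACKENDS[policy][0](key, msg)

def verify(policy: str, key: bytes, msg: bytes, tag: bytes) -> bool:
    return _BACKENDS[policy][1](key, msg, tag)

# Stand-in backends for illustration only.
register("classical",
         lambda k, m: hmac.new(k, m, hashlib.sha256).digest(),
         lambda k, m, t: hmac.compare_digest(hmac.new(k, m, hashlib.sha256).digest(), t))
register("pqc-ready",
         lambda k, m: hmac.new(k, m, hashlib.sha3_512).digest(),
         lambda k, m, t: hmac.compare_digest(hmac.new(k, m, hashlib.sha3_512).digest(), t))

# Migrating a workload is a one-line policy change, not a code rewrite.
key, msg = b"secret", b"model-artifact-v1"
tag = sign("pqc-ready", key, msg)
print(verify("pqc-ready", key, msg, tag))  # True
```

The design choice is that no caller ever names a cipher directly, which is what lets an architecture "adapt as regulatory standards and threats evolve."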
How is AI fundamentally changing the threat landscape — not hypothetically, but in practical, operational terms?
Oehley: The reality is that this challenge is only going to intensify. Threats are becoming more sophisticated, and bad actors will always move faster than the defenses trying to stop them. That’s why CISOs now have a permanent seat at the table — security is no longer a technical issue, it’s a core business risk.
At the same time, organizations are operating in an increasingly complex regulatory environment. In the U.S. alone, privacy and compliance requirements vary by state, while European regulations add another layer entirely. Keeping up requires deep coordination with privacy and legal experts and significant internal resources — something many organizations are struggling to sustain.
Importantly, this isn’t just a problem for large enterprises. Company size doesn’t determine impact. Smaller organizations can suffer devastating brand damage from a single incident, as high-profile cases have shown, and recovery can be extremely difficult. Larger organizations may have more resources to absorb the cost, but the stakes — financial, reputational, and operational — are often even higher. In today’s environment, no organization is insulated from the consequences of getting security wrong.
What do companies need to do to improve their current situation?
Samuelson: As privacy rules evolve worldwide and regulators continue to scrutinize AI data practices more closely, enterprises that treat privacy as an architecture problem win on both compliance and agility. Data governance must live in the stack, not in spreadsheets, and modern privacy approaches must be measurable, automated, and built on provable controls rather than policy statements.
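"Governance in the stack" is typically realized as policy-as-code: controls evaluated automatically against data assets rather than documented in spreadsheets. A minimal sketch of one such provable control, gating sensitive fields before downstream AI use; the field names and policy are hypothetical, not a description of any vendor's product:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Field:
    name: str
    sensitive: bool
    encrypted: bool

def violations(fields: List[Field]) -> List[str]:
    """Automated control: every field tagged sensitive must be
    encrypted before it may flow into an AI training pipeline."""
    return [f.name for f in fields if f.sensitive and not f.encrypted]

dataset = [
    Field("user_id", sensitive=True, encrypted=True),
    Field("email", sensitive=True, encrypted=False),    # violation
    Field("page_views", sensitive=False, encrypted=False),
]
assert violations(dataset) == ["email"]  # the pipeline gate blocks this dataset
```

Because the check runs in the pipeline itself, compliance becomes a measurable property of the system rather than a policy statement.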

