
Can you actually trust the data residing in your IT landscape?


Author: Fredric Travaglia, Business Architect, Epical

 

Do you lie awake at night wondering whether you can actually trust the data residing in your IT landscape? Maybe you should. 
 
Do you know if someone has been tampering with your systems of record, not enough to trigger alarms but just enough to create cascading issues in production, distribution, or reporting?
 
This is written with a bit of a tongue‑in‑cheek tone to jostle your nerves and get your attention — but the truth is that the phenomenon of data “disturbances” has recently become more common, especially in connection with ransomware attacks or similar intrusions.
 
Typically, attackers encrypt or lock down critical data sources for ransom. But when the victim enterprise pays to get the data back, some records may have been altered along the way — and because you cannot tell which ones, practically all of the data becomes untrustworthy. Some of it may be possible to verify algorithmically across sources and systems, but that is a huge undertaking, and very resource‑ and time‑consuming.

What can be done to pre‑empt and avoid such risks?


Working proactively with threat detection, penetration testing, identity and access management strategy and architecture, and so on, is of course Enterprise Security 101 — but there may also be ways of boosting data integrity and resilience within the IT landscape itself.

Observability, monitoring, and logging of transactions across the enterprise estate can act as a separate source of “truth” to independently verify data through logs and support data integrity checks — in addition to identifying signs of unauthorized access, of course.

Here are a few suggestions on how to strengthen resilience and build for trust, rather than recovery.

1) Assume breach, then design for verifiability (not just security)


Let’s say you did everything “right”: MFA everywhere, least privilege, tuned EDR, SOC on speed-dial. Great. Now answer the more uncomfortable question: if an attacker still got in, could you prove what changed and when? 
 
Most enterprises can detect outages faster than they can detect subtle, surgical edits to master data, pricing tables, BOMs, bank details, or inventory parameters.
 
So, in addition to prevention, you need a posture of continuous verifiability: controls and telemetry that make your critical data auditable by construction. This shifts the problem from:
 
“Did something happen?” 
to 
“What exactly happened, to which records, via which identity, from which endpoint, and what downstream processes consumed it?”
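To make that shift concrete, a change-audit event can be captured as a structured record that answers each of those questions directly. The sketch below is purely illustrative — the schema, field names like `identity` and `endpoint`, and the sample values are all invented for this post, not taken from any particular product:

```python
import json
from datetime import datetime, timezone

def change_event(table, record_id, field, before, after, identity, endpoint):
    """Build one structured audit fact: what changed, when, via which
    identity, from which endpoint. (Hypothetical schema for illustration.)"""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "table": table,
        "record_id": record_id,
        "field": field,
        "before": before,
        "after": after,
        "identity": identity,
        "endpoint": endpoint,
    }

# Example: a supplier bank-account change, captured with full context
event = change_event("supplier_master", "S-1042", "bank_account",
                     "SE45 0000 1234", "SE45 0000 9876",
                     "svc-erp-integration", "10.20.30.40")
print(json.dumps(event))
```

Emitting events like this for every sensitive change is what makes the downstream questions ("which records, via which identity") answerable at all.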

2) Build an “append‑only truth” layer for critical transactions


If your systems of record can be altered, you want at least one parallel system of evidence that is hard to rewrite.

Think: append‑only event streams, immutable log stores, WORM‑capable object storage, or cryptographically chained audit logs.
Not because you love compliance, but because you love not rebuilding your ERP by hand.

For high‑value domains (orders, shipments, invoices, payments, product master updates), capture “before/after” transaction facts as events and store them where retroactive tampering is materially harder.

When the day comes, you can replay, reconcile, and verify. Bonus: you also get better forensics, faster incident response, and a much easier time proving to insurers that you’re not improvising.
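As a minimal sketch of what "cryptographically chained audit logs" means in practice, consider the toy Python class below (the `ChainedAuditLog` name and sample records are invented for illustration). Each entry's hash covers the previous entry's hash, so retroactively editing any record breaks verification from that point onward:

```python
import hashlib
import json

class ChainedAuditLog:
    """Append-only log where each entry's hash chains to the previous one,
    so retroactive tampering is detectable. (Minimal illustrative sketch.)"""
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute the whole chain; any altered record breaks it."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = ChainedAuditLog()
log.append({"order": "O-1", "amount": 120})
log.append({"order": "O-2", "amount": 75})
print(log.verify())  # True: chain intact
log.entries[0]["record"]["amount"] = 999  # simulate retroactive tampering
print(log.verify())  # False: tampering detected
```

In a real deployment you would anchor this to WORM storage or an external timestamping service so the chain itself cannot simply be regenerated by the attacker, but the principle is the same.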

3) Treat “data integrity” like a production Service Level Objective — with anomaly budgets


Most organizations have uptime SLOs. Some have latency SLOs. Almost none have data integrity SLOs, which is why data issues are often discovered via angry emails or end‑of‑quarter panic.

Define measurable integrity indicators for key datasets, such as:

  • unexpected drift in totals (inventory, revenue, payables/receivables)
  • unusual change rates (supplier bank account updates, price overrides)
  • schema or reference integrity violations (orphaned keys, broken hierarchies)
  • distribution shifts (product category mix, customer segments, shipping lanes)

Then monitor them like you monitor CPU: alert on statistically unlikely changes, not just hard failures. Give yourself an “anomaly budget” and investigate when you burn through it too quickly. This is where observability stops being a dashboard hobby and becomes operational survival.
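As a toy illustration of "alert on statistically unlikely changes, not just hard failures": the sketch below flags a daily change count that deviates more than a few standard deviations from its historical baseline. The function, threshold, and sample data are all hypothetical; real integrity monitoring would account for seasonality and trend:

```python
import statistics

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's count if it lies more than z_threshold standard
    deviations from the historical baseline. (Toy z-score check.)"""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

# Daily supplier bank-account updates over the past two weeks (made up)
history = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 4, 3, 2]
print(is_anomalous(history, 3))   # False: within the normal range
print(is_anomalous(history, 25))  # True: anomaly budget burned, investigate
```

The "anomaly budget" idea then falls out naturally: count how many such flags you accept per week before escalating, just as an error budget bounds acceptable downtime.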

4) Make recovery about clean data, not just restored systems


Traditional DR thinking: “Can we restore from backup?”
Modern ransomware reality: “Can we restore to a point in time we can trust?”

A resilient approach includes:

  • clean-room recovery: isolated environments where you restore, validate, and only then reintroduce data

  • integrity gating: automated reconciliation checks that must pass before data reenters production

  • known-good snapshots of “golden” reference data (customer, supplier, product master) with signed hashes to quickly detect drift
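A minimal sketch of the "known-good snapshots with hashes" idea, assuming the reference data can be serialized deterministically. The helper and the sample records below are invented for illustration; a production version would sign the baseline hash and store it outside the environment being protected:

```python
import hashlib
import json

def snapshot_hash(records):
    """Deterministic hash of a reference dataset: sort records and
    serialize canonically so identical content yields identical hashes."""
    canonical = json.dumps(sorted(records, key=json.dumps), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Golden supplier master data, hashed at a known-good point in time
golden = [{"supplier": "Acme", "iban": "SE45 0000 1234"},
          {"supplier": "Beta", "iban": "FI21 0000 5678"}]
baseline = snapshot_hash(golden)

# After a restore: recompute and compare before reintroducing the data
restored = [{"supplier": "Acme", "iban": "SE45 0000 9999"},  # drifted!
            {"supplier": "Beta", "iban": "FI21 0000 5678"}]
print(snapshot_hash(restored) == baseline)  # False: drift, gate fails
```

A check like this is what turns "integrity gating" from a policy statement into an automated pass/fail step in the recovery pipeline.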

Recovery drills should include data validation, not just infrastructure restoration.
If you’ve never practiced proving your restored ledger balances across systems, you haven’t practiced recovery — you’ve practiced reinstalling.
 
To paraphrase James Clear’s Atomic Habits:
When you’re under pressure, you won’t rise to the level of your security policies; you’ll fall to the level of your recovery rehearsals.
 
When it comes to building verifiability into the enterprise information supply chain, observability is a critical capability — not just for uptime, but for trust.

How to get started: taking the next step forward

Through our long-standing partnership, Epical and Nodinite help organizations monitor, trace, and validate business-critical transactions across complex IT landscapes. By providing end-to-end visibility across integrations, APIs, middleware, and message flows, Nodinite gives customers an independent source of operational truth: what happened, when it happened, what data moved, and which downstream systems were affected. 

This makes it possible to detect abnormal transaction patterns earlier, strengthen auditability, and support recovery scenarios where the key question is no longer “Can we restore systems?” but “Can we trust the restored data?” 

Together, we help Nordic enterprises improve resilience, automate and secure their information flows, and build an integration foundation designed not only for recovery — but for confidence.

Recently, the Swedish Civil Defence and Resilience Agency distributed a brochure with recommendations on resilience and preparedness. I shared some reflections on this in my LinkedIn post. We can all do our part to improve and prepare for the future.

A first step toward addressing these challenges starts with understanding your current state, whether through an initial call, a discovery exercise, or a mapping workshop together with us.

This is one of those areas where a proactive stance, thinking ahead and leaning on the experience of those who have been through the trenches before, can be a 10x or even 100x multiplier.

 

Fredric Travaglia
Business Architect, Epical
 
We’re happy to help and can take you to the next stage regardless of your current state of maturity and readiness.

Contact us

Tomas Jinneklint
Sales Sweden
+46 70 962 6385

Mikko Punakivi
Sales Finland
+358 40 568 4808