Do you Believe in the Data Fairies?

This is an updated version of a blog I first wrote back in August 2012. Fourteen years on, I'm genuinely surprised by how little has changed — and how much more urgent the problem has become.

Picture the scene: you arrive at work, open your laptop, and settle down to do some important work, confident that the data you need will be ready and waiting for you. But here's my question: do you know how that data gets there, and where it comes from?

Over the years I've come across countless situations that run something like this:

Team A: Our data is loaded up by IT.

IT: No, we don't touch that data — it's a manual load by Team B.

Team B: We just send the spreadsheet to Team A. We're sure they load it.

Team A: No, we really don't load that data…

My friend and colleague Justin York coined the phrase "data fairies" for precisely these situations, and it describes them perfectly. For many people, the data simply appears, and whilst they may not literally believe in the data fairies, they have no idea how they get their data, or sometimes even where it comes from. Sadly, they often don't care either.

I was lucky enough to hear Peter Aiken speak years ago, and he summed the situation up with a brilliant analogy: most people think about data the same way they think about air. They don't think about it at all. They just assume it will always be there, and always be good enough to use. It's only when something goes wrong that they stop to consider the quality of what they're working with.

And I'd add: even then, do they think about where the data comes from?

All too often, the answer is no. Having been forced to acknowledge a data quality problem, the usual reaction is a tactical fix of the data in front of you. After all, if you don't know where the data came from or how it got there, how could you possibly consider a more permanent solution? And so begins a cycle of constant data cleanses and patches that quietly become part of the normal way of working — nobody questions them, they just become "the process."

This is why data lineage (knowing where your data comes from and what happens to it on its journey to you) matters so much. It allows problems to be traced back to their source, and permanent solutions to be properly considered.
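To make the idea of lineage concrete, here's a minimal sketch of recording it as a simple upstream-dependency graph and tracing a report back to its true sources. The dataset names ("monthly_sales_report", "crm_export" and so on) are invented purely for illustration; real lineage tooling captures far more, but the core idea is just this walk upstream.

```python
# Each dataset maps to the upstream datasets it was derived from.
# An empty list marks an original source system.
lineage = {
    "monthly_sales_report": ["sales_mart"],
    "sales_mart": ["crm_export", "finance_ledger"],
    "crm_export": [],        # original source system
    "finance_ledger": [],    # original source system
}

def trace_to_sources(dataset, lineage):
    """Walk upstream until we reach datasets with no parents."""
    parents = lineage.get(dataset, [])
    if not parents:
        return [dataset]
    sources = []
    for parent in parents:
        sources.extend(trace_to_sources(parent, lineage))
    return sources

print(trace_to_sources("monthly_sales_report", lineage))
# ['crm_export', 'finance_ledger']
```

When a quality problem surfaces in the report, a map like this is what lets you ask "which source system do we fix?" instead of patching the report itself.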

Now here's where 2026 changes everything.

When I wrote this post in 2012, the stakes of not knowing your data lineage were significant but manageable. Poor decisions, regulatory headaches, wasted hours fixing bad reports. Not ideal, but survivable.

In the age of Gen AI, the stakes are categorically higher.

Organisations are now feeding data into LLMs to drive automated decisions: about customers, about risk, about operations. If you don't know where that data comes from, you cannot know whether your AI is trustworthy. Garbage in, garbage out has always been true, but when the "out" is an automated lending decision, a medical recommendation, or a fraud flag, the consequences are very different from a slightly wrong internal report.

There's a cruel irony here too. Many organisations are turning to AI to help with data quality and data lineage. It can genuinely provide some help for these activities, but AI cannot magic good data governance into existence. It can help you document lineage faster, flag anomalies more quickly, and surface patterns a human might miss. What it cannot do is substitute for knowing, at a fundamental level, what your data is and who is responsible for it.

The data fairies are not just still with us, they're now feeding the models.

My advice remains the same as it was in 2012, but it's more urgent: take an iterative approach to building up your data lineage. Use every data quality incident as an opportunity to document what you learn. Build a repository, piece by piece. And critically, make sure your AI governance and your data governance are not two separate conversations - because they cannot be.

The data fairies aren't going away on their own. In the age of Gen AI, understanding your data has never mattered more. My book Effective Data Governance (Kogan Page, 2026) gives you the framework to tackle all of this. It's available now.


Why Your Executives Need to Hear This Before Your Next AI Project

AI conversations are happening in most organisations right now, and if you work in data governance, you're probably right in the middle of it.

AI is everywhere. Executives are excited, and rightly so. The possibilities are genuine: new services, reduced costs, greater efficiencies, things that simply weren't possible before. It's shiny, it's fast-moving, and there's enormous pressure to get on with it.

And then there's you. The data governance professional. Quietly clearing your throat and saying, "But wait..."

I get it. I really do. You’re not wrong to have concerns, but the way we raise those concerns matters enormously. If we're not careful, we become the naysayers. The people who slow things down. The ones standing in the way of progress with a clipboard and a risk register.

That's not who we are. And it's not who we need to be.

Stop Talking About Data Governance

I know that sounds counterintuitive, but hear me out.

Nobody (and I mean nobody) wakes up wanting to do data governance for the sake of data governance. What they want is for their AI initiative to be a success. They want the business outcomes that AI promises to deliver. So if we walk into a conversation leading with "we need to do data governance first," we've already lost the room.

Instead, start with the AI initiative itself. What is your organisation actually trying to achieve? What problem is this first AI project solving? Who are the stakeholders driving it? Go and talk to them, not to slow things down, but to understand what they're building.

Then work backwards. What data does that AI need to function? Is that data good enough? Do you know where it lives? Is it documented? Are there known quality issues? Could it be coming from multiple sources that need to be pulled together?

That's your story. Not "we need data governance." But "here's why your AI project won't deliver what you're hoping for unless we address these specific data issues."

Make It Specific. Make It About Outcomes.

The more specific you can be, the more persuasive you'll be. Vague warnings about data quality don't land. But "this AI initiative is designed to improve customer response times, and the customer data it will rely on has known duplication issues that will undermine the results" — that lands.

Tie every data governance conversation directly to the outcome the AI is supposed to deliver. If the data isn't fit for purpose, the AI won't achieve what the business is counting on. Say that clearly, calmly, and with evidence where you can.
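One way to turn a vague worry into the kind of evidence described above is to put a number on it. This is a toy sketch, assuming a hypothetical customer dataset where email address identifies a customer; the records are invented for illustration, and in practice you would run the same calculation against the real source feeding the AI initiative.

```python
# Hypothetical customer records with a known duplication problem.
customers = [
    {"email": "ana@example.com",  "name": "Ana Silva"},
    {"email": "ana@example.com",  "name": "A. Silva"},     # duplicate
    {"email": "ben@example.com",  "name": "Ben Okafor"},
    {"email": "cara@example.com", "name": "Cara Jones"},
]

# Measure duplication by counting distinct email addresses.
unique_emails = {c["email"] for c in customers}
duplicate_rate = 1 - len(unique_emails) / len(customers)

print(f"{duplicate_rate:.0%} of customer records are duplicates")
# 25% of customer records are duplicates
```

"A quarter of the customer records this AI will rely on are duplicates" lands with executives in a way that "we have some data quality concerns" never will.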

Be Pragmatic, Not Perfect

Here's the other trap to avoid: if you do get the green light to work on data governance alongside the AI project, don't immediately demand that everything be perfect before anything moves forward.

That's a fast track to being sidelined.

Instead, think in phases. What's the minimum we need to get right for the testing phase? What can we improve incrementally as the project progresses? How do we raise the bar over time rather than trying to boil the ocean before day one?

Pragmatism isn't compromising your standards. It's recognising that progress beats paralysis, and that building trust with your AI colleagues is how you earn a permanent seat at the table.

Be the Enabler, Not the Obstacle

The shift I'd encourage every data governance professional to make is this: stop positioning yourself as the person who might block the AI project, and start positioning yourself as the person who will help it succeed.

Because that's the truth. Good data governance isn't the enemy of AI. It's what gives AI the foundation it needs to actually deliver. When the data is right, the AI works. When the AI works, the business gets its outcomes. And you were part of making that happen.

That's a very different conversation from "don't do AI without data governance."

Want to learn more about AI and Data Governance? Book a call with me below!


Guest Blog from Helen Cullis - Why Having ‘Happy Data’ is the Key to Your Organisation’s Success

One of the things I've always found most valuable when explaining Data Governance is a good analogy. Analogies help bridge that gap between abstract concepts and everyday understanding, making Data Governance feel less like technical jargon and more like common sense. Helen Cullis’s take on "happy data" does exactly this, and I think you're going to love her puppy analogy as much as I did when she first shared it with me.

Over to Helen…

When I talk about happy data, I’m not trying to introduce anything revolutionary. This is really about a way of framing Data Governance that helps people across an organisation understand why data matters, why they should care about it, and what “good” actually looks like.

Because let’s be honest - in modern organisations, data is hard to see. It’s spread across cloud platforms, tools, pipelines and models. Unless it’s visibly breaking something or slowing people down, most people don’t feel connected to it in a meaningful way.

And that’s a problem.

So… what is happy data?

To explain this, I like to use an analogy: adopting a rescue puppy.
(If you’re more of a cat or reptile person, don’t worry - the idea still works!)

When you first adopt a rescue puppy, it might be sad, unloved or untrusting. It may have health issues, bad habits or behavioural problems. But with the right care, attention and consistency, that same puppy can become healthy, confident and a much-loved member of the family.

Data is no different.

I came up with the idea of happy data when I was trying to find better ways to get people to care about the data they own - especially those in roles where working directly with databases or data pipelines isn’t part of their day-to-day work. Personifying data helps people understand both the current state of their data and what Data Governance is trying to achieve.

When I’m working with organisations at the start of a Data Governance journey, people often ask:
“But what exactly do you need me to do?”

Framing it as “making the data happier” works surprisingly well. I’ll often suggest that people ask themselves at the end of each week:
Have I done something to make the data happier?
Over time, that simple question helps turn data governance into a habit - small, consistent actions that people can feel good about, even when the progress feels incremental.

So, back to our rescue puppy analogy. What does happy data look like?

  • Clean, high-quality data
    The puppy has had flea treatments, trimmed claws and regular baths and grooming.

  • Data that is actively looked after
    Preventative and remedial activities protect quality in pets and data alike: feeding the puppy properly, taking it for walks, and booking regular vet check-ups.

  • Trusted, valuable data
    The puppy is well trained, has good recall, and adores the family.

Why does happy data matter?

Unhappy data causes real problems.

It makes organisations inefficient, creating manual workarounds, broken processes and delays. In some cases, data arrives too late to meet key deadlines. This is the equivalent of the puppy chewing your shoes or destroying the furniture.

Unhappy data also leads to poor decision-making. Genuine data-driven decisions are impossible without trusted data. In fact, leaders may end up relying on experience and instinct rather than on data they know to be low-quality and unreliable.

There’s also an opportunity cost. With happier data, organisations can move faster, spot trends earlier and gain a competitive edge. This is becoming even more critical as AI and advanced analytics rely on high-quality, well-governed data to deliver meaningful outcomes.

Happy data also enables data democratisation. When data is trusted, well understood and appropriately governed, it can be shared more widely across the organisation. Combined with the right skills and training, this unlocks effective self-service and innovation.

Finally, there’s risk and compliance. While it shouldn’t be the only driver for Data Governance, it’s still important - like buying pet insurance or using a good harness or lead to keep the puppy safe.

How do you make your data happier?

There are three equally important pillars:

  • Data Quality

  • Data Culture

  • Data Governance

They’re deeply interconnected, and you need all three. You can have excellent data quality controls and governance frameworks, but if people don’t care about data - if the data culture is poor - you’ll never realise the full value. Data culture is often overlooked - not because it isn’t important, but because it’s harder to define and takes longer to change. It’s a slow burn.

Back to our puppy:

  • Have you created a safe environment where it can thrive?
    (Do you have the right systems and tools in place?)

  • Does it have fleas that need treating?
    (Have you cleaned up poor-quality or obsolete data?)

  • Are you doing regular vet check-ups?
    (Do you do preventative and remedial data quality activities?)

  • Are the walks interesting and well planned?
    (Do your data flows make sense?)

  • Can the puppy be trusted?
    (Is your data easy to find, understand and reliable?)

  • Have you read up and taken professional advice?
    (Do people have the skills and knowledge to fulfil their Data Governance responsibilities?)

  • Has it had training classes?
    (Do metadata and governance create trust in what the data is telling you - especially when feeding AI models?)

  • Did you buy pet insurance?
    (Are the right controls in place?)

A final thought

Happy data doesn’t happen by accident. It’s the result of many small, shared, consistent actions across quality, culture and governance.

So here’s my challenge: what could you do this week to make your data happier?

If you’d like to continue the conversation, feel free to connect with Helen on LinkedIn.

Helen is a Data Governance specialist with a real knack for bringing strategy and execution together. With extensive experience in leadership and data in complex organisations, she brings a collaborative, stakeholder-focused approach that combines technical knowledge with the ability to engage, influence, and drive meaningful change. She empowers organisations to get real value from data and confidently take advantage of transformative technologies like AI. What really stands out about her is her ability to build trust and strong relationships at every level. She has worked across the healthcare, higher education, and financial services sectors and is passionate about diversity and inclusion.


Happy New Year - 2025 Roundup

Happy New Year and hello, 2026!

I hope you’ve all had a wonderfully festive season and have emerged from your mince pie-induced comas ready to take 2026 by storm. 

As we go into the new year, it’s become a little tradition of mine to share my top 10 most popular blogs from the previous year. It’s always fascinating to see which topics resonated most with readers - sometimes it’s the posts I expected, other times it’s a surprise!

Not only is it fascinating for me, but I think it’s quite helpful for readers to see which blogs have been hot topics. 

So, without further ado, here are the top 10 reads from 2025:

  1. Do You Need a Data Strategy and a Data Governance Strategy?

  2. Data Governance on a Shoestring Budget

  3. Who is Responsible for Drafting Data Definitions?

  4. What is a Data Governance Roadmap?

  5. Why Every Organisation Needs a Data Governance Council

  6. Data Governance Interview with Jane Meharg

  7. What Problems Do Data Governance Councils and Domain Forums Actually Solve?

  8. 15 Ways I Can Transform Your Data Governance Journey

  9. The Relationship Between Data Governance and Data Quality

  10. The Backstory of Data Governance and My Path Alongside It

I hope this roundup gives you something useful to revisit, or even discover for the first time.

If you need any further support with your Data Governance initiative, why not hop on a free call with me? We can discuss your organisation’s roadmap together.


The Tale of Dick Whittington and the Missing Data

Christmas is fast approaching, and it’s safe to say the season wouldn’t be complete without a few cheeky jokes, a whole lot of glitter, and an enthusiastic crowd shouting “It’s behind you!” at the top of their lungs. 

For those of you who have no idea what I’m talking about, a pantomime is a musical stage production which takes traditional fairy tales and retells them with larger-than-life characters, slapstick comedy, jokes, gags and a big baddie we can all boo at. They’re a staple of the British festive season, and, believe it or not, the perfect metaphor for data gone wrong. 

So, sit back, relax, and grab your popcorn, because today I’ll be weaving my own little tale, featuring our fortuitous hero Dick Whittington…

Once upon a time… there was a man called Dick Whittington. He was a humble man. Loyal, dependable and hard-working - he loved his job at the Dense Doughnut Bakery. One day, his boss, Mr Dense, tasked him with finding a way to improve customer experience within their organisation. And this, my friends, is where our story begins…

Our humble Dick set out on his journey to improve customer experience. Being the eager employee that he was - and desperate to please Mr Dense - Dick went looking for a magical and instant solution. He trawled and trawled the internet but was getting nowhere fast. Distracted, he decided he would start his Christmas shopping, and headed to eBay. There, he discovered a listing for a magic lamp (as you do), which he thought his dear wife would love. And, not one to hang around, Dick clicked ‘buy it now’ and selected express shipping. 

When his magic lamp arrived, Dick polished it furiously and to his surprise out popped a Genie, who offered him three wishes. 

“Aha!” exclaimed Dick. “That is the answer to all my problems! I will use these wishes to improve customer experience at Dense Doughnuts and still have the lamp to give my wife for Christmas!” 

So, Dick set about making his first wish and he asked for lots of shiny new technology tools to help him manage his customer experience. Secondly, he wished for an instantaneous data migration from their old systems to put the data into his shiny new systems. And thirdly, he wished for a self-service portal, so that customers could access their own records and manage them themselves. 

These were good wishes, but the old saying is true: ‘you get what you ask for’. It was a disaster. The Genie hadn’t thought about the data that was going into these tools. And why would he… he’s a magical Genie, not a data scientist! 

Dick couldn’t launch his online portal because the customer data was so very poor, and they didn't want their customers to know how bad it was - that would’ve had the opposite effect of improving customer experience!  Even worse, half the data they thought they had about their customers didn't appear in the shiny new tools.  So, Dick had to go on a hunt to find the missing data. 

He searched high and low, low and high, shouting at the top of his lungs: “Data, data - where are you?” 

“It’s behind you!” echoed the mysterious chorus of voices. 

“It’s behind you, it’s behind you… IT’S BEHIND YOU!” the voices continued to chant. 

As the voices grew louder, Dick began to question his sanity, as every time he turned around… the missing data was gone (a classic panto gag, if ever I’ve heard one). Poor Dick Whittington was on a wild goose chase.

Feeling lost and confused, Dick cried out: “Oh, woe is me. If only there was one place I could go to see what data we have in this company and where I can find it. If only we had a catalogue of data. That’s what I should have wished for.”

After hours and hours of searching, Dick finally found some data that he thought might do the job. But he was worried about repeating past mistakes, or using the wrong data and damaging the customer experience. Dick wasn’t sure what to do. He shouted: “IS THIS DATA GOOD ENOUGH?!” 

“Oh, yes, it is!” shouted the mysterious chorus. Quickly followed by “Oh no, it isn’t!” This went on for another hour. 

“Oh, yes, it is!”

“Oh no, it isn’t!”

“Oh, yes, it is!”

“Oh no, it isn’t!”

Uh oh, here we go again! Now poor Dick was more confused than ever! Sherry Trifle (the resident know-it-all) spotted poor confused Dick and came over to see if she could lend a helping hand.

She said: “Well, if you're not sure, do you really dare use this data?” 

“But I’ve wasted hours searching and this is all I have to show for it - I have to do something!” 

“What if you find out that it’s really bad quality, or even worse that it's the wrong data and we make the wrong decisions on it?” replied Sherry. 

“If only there was some way I could know if the data was good enough to use” replied Dick.

Sherry said she knew someone who could help, and introduced Dick to a wonky looking chap with a big basket of dirty data. 

“Hello! I’m Wishy Washy - pleased to meet you! That looks like a lot of data you have there. I could cleanse that for you if you like?” he offered.

Dick wasn’t really sure if it was the right thing to do, but he was running out of options. So, with great apprehension, he took Wishy Washy up on his offer and followed him to the laundry. 

The laundry was full of hustle and bustle. It was noisy with machines spinning round and round cleansing data and making it ‘better’. Dick asked Wishy how they knew what they were doing and how they would figure out which parts of his data were good or bad. 

“I don't know - we just put it in the washing machines, and it comes out clean!” replied Wishy. Dick was worried. And he was right to be. When Dick got his data back it was definitely different. But he was still not convinced that it was right.

It turns out Dick was correct. As it goes, all the wishful thinking in the world couldn’t fix his data.

It all got too much, and Dick sank to the ground. He felt utterly hopeless and defeated, and wondered why on earth Mr Dense chose him to take on this project to improve customer experience…

But Dick was about to undergo his most transformative experience yet. 

Suddenly - POOF! In a big puff of sparkly fog, Dick’s Data Fairy Godmother appeared! She explained to Dick that all the things he’d wished for from the Genie were wrong.

BUT, all hope was not lost, as the things he'd wished for during his journey (one place to document all the data and where it’s held, a way of knowing how good or bad the data is, and finally, a sensible way to fix bad data) were the wishes he should have asked for to begin with. And the most important part? He didn't need a Data Fairy Godmother or a Genie in a lamp to give him those wishes – he could simply start a Data Governance initiative!

So, with the right processes, integrity, and accountability, Dick’s business went on to live happily ever after.

The END!

Now you see, pantomimes can be fun to visit once a year but are a long, drawn-out nightmare to live in. Don’t be like Dick and don’t make the wrong data wishes! Avoid working in a data pantomime, implement Data Governance and remember Data Governance is not a project - Data Governance is for life, not just for Christmas!


Making Data Governance Processes Accessible

One of the most practical questions I encounter in data governance is deceptively simple: how should we document our processes? The answer reveals a crucial insight about implementation success, because formal documentation and user-friendly communication require very different approaches.

The Appeal of Swim Lane Diagrams

My early career in banking introduced me to BPMN (Business Process Model and Notation), which produces swim lane diagrams. These structured visual representations organise processes into logical rows, with each lane representing a specific role. The flow shows precisely who does what and how handoffs occur between parties.

For those with analytical inclinations, swim lane diagrams offer clarity and logical precision. My work with BPMN specialists at the first consultancy I worked for reinforced their value as robust documentation tools. When executed properly, they capture process complexity with remarkable accuracy.

The Documentation Dilemma

However, there's a challenge: swim lane diagrams are divisive. They're what I'd call a "Marmite thing": you either love them, or you hate them and find them impenetrable. Even before specialising in data governance, I appreciated their structure. But I've learnt that many business users don't share this enthusiasm.

This creates a fundamental tension in process documentation. The formats that satisfy governance requirements often fail to resonate with the people who must actually follow these processes.

A Dual Track Documentation Strategy

I recommend an approach which recognises both requirements:

Formal Documentation Layer: Document data governance processes using your organisation's standard methodology. If your organisation already employs swim lane diagrams organisation-wide, continue that practice. If alternative formats are standard, adopt those instead. Consistency with organisational norms is important in everything we do, as it eliminates unnecessary friction.

Communication Layer: Transform formally documented processes into simplified, accessible formats for business users. Create straightforward visual diagrams that fit on a single PowerPoint slide. These high-level representations help people grasp the essential flow without overwhelming detail. And if a business stakeholder asks for more detail, you can then share the formal process documentation.

This dual track approach has proven remarkably effective. In my experience, business users consistently respond positively to simple pictorial diagrams that distil complex processes into digestible visual narratives, but you have the formal documentation in place for the times when it is required.

Practical Implementation

The workflow becomes: document rigorously using established organisational standards, then translate that documentation into user-friendly communication materials. Your formal documentation satisfies compliance, audit and operational requirements. Your simplified diagrams drive adoption and understanding.

This distinction matters because data governance succeeds only when people actually follow the processes you've designed. Perfect documentation that nobody uses delivers no value at all!

Effective process documentation isn't about choosing between rigorous and accessible. It's about maintaining both layers simultaneously, each serving its distinct purpose.

If you'd like support developing process documentation that works for your organisation, book a call with me using the button below.


Oui Chef!

I’m thrilled to share this guest blog from Hyatt Saba, Data Governance Manager at Sage. I always enjoy my conversations with Hyatt about all things Data Governance - she has such a great way of bringing the topic to life through clever analogies.

In this blog, Hyatt draws on the world of professional kitchens to show how clear accountability and defined responsibilities are the key ingredients for successful Data Governance.

It's Saturday night, and in the thick of dinner service at a popular new restaurant, table 7 orders two raspberry and white chocolate soufflés to finish off their wonderful meal in celebration of their wedding anniversary. The Chef de Partie turns the oven dial to 200°C, weighs the flour to precision, and lovingly folds in the top-quality ingredients. She pours the batter into the pots and opens the oven door to deposit her perfectly prepared desserts, only to discover that the oven is stone cold.

Aargh! What now? Well, it’s fair to assume that most of the dessert menu is off for the night. Let’s hope that the diners enjoy ice cream!

The hot potato...

So, whose responsibility is it to ensure the oven works? Is it the Chef de Partie, who uses the oven and couldn’t fulfil her order? Is it the Head Chef, who is accountable for the kitchen’s machinery and for meeting financial targets (which will be negatively impacted by the inability to serve desserts)? Or is it the Sous Chef, who is responsible for scheduling periodic maintenance?

Food for thought

No doubt, in a kitchen environment there might be some choice words - or knives thrown - in the heat of the moment. But in the spirit of getting the kitchen back on track, it’s worth resisting the temptation to assign blame and instead getting to the bottom of how a broken oven fell through the gaps, to stop it happening again. It’s critically important to understand accountability and to ensure roles and responsibilities are in place so the kitchen runs more smoothly in future, rather than making assumptions. Who, ultimately, should have checked the oven was working that day?

The same applies to data management. The purpose of assigning accountability and clearly defined roles is to provide clarity and steer the management of data so that such situations don't occur.

Without clearly defined accountability and responsibility, things are left open to chance and are at risk of not being done consistently. We can easily assume that someone else will do it, or that it's someone else's job, and that might be OK when everything is running smoothly. However, as the scenario above highlights, what happens when luck runs out and things go wrong?

A recipe for success: Data Governance assigned roles

Taking chance out of the equation, reducing risk, and introducing rigour and quality are at the core of Data Governance, and establishing and surfacing the accountability and responsibility for our data assets is a fundamental part of that. We do so by engaging with stakeholders and ascertaining how data is managed and who the decision makers are.

Several roles exist in the practice of Data Governance; in this article we will explore the following three: Data Owner, Lead Data Steward and Data Steward.

Applying that to avoiding too many cooks in the kitchen:

It's important to know who is accountable for data and who is responsible for its upkeep - everyone doing the same thing without clear definition and agreement can cause confusion, duplication and the risk of things being missed. Governance is the answer to providing clarity, transparency and traceability. I think this concept bears a great likeness to the formal roles we all know exist in a restaurant kitchen.

  • In a restaurant we have a Head Chef who is accountable for overall kitchen management, menu creation, and culinary direction. A Head Chef may not have personally cooked the dish, but they are still accountable for every plate served. It needs to be what it says on the menu, allergies respected, food safety adhered to at each step of the preparation process, consistent portion control and presentation, whilst ensuring profitability and timeliness. While a Head Chef might not necessarily have held a pan in the preparation of the meal, they hold ultimate accountability, just like a Data Owner does for a data asset.

  • The Sous Chef is second-in-command in the kitchen, assisting the Head Chef and overseeing day-to-day operations. They are the middleperson who takes that strategic direction and brings it into action, often through delegation. They may oversee preparation, adjust the seasoning, organise machinery maintenance, arrange the appropriate garnish and serve up on the right plate. A Sous Chef makes things happen: think of the Sous Chef as the Lead Data Steward.

  • Last but certainly not least is the Chef de Partie, who is in charge of a specific section of the kitchen, such as the grill, pastry or sauces. Think of a Chef de Partie as a Data Steward, who is often right in the detail and a Subject Matter Expert (SME) in their own right. They know how to turn that boring pile of flour into a perfect soufflé every time.

A culinary giant or a petite bistro

It’s worth calling out that operations often differ between a large organisation and a smaller one. In a smaller company one person might hold several roles: they might be a Data Owner, a Lead Data Steward and a Data Steward all at once. The independent corner café will probably have one person who owns and manages the running of the café, sources the coffee beans and makes the coffees, whereas a Michelin-star restaurant might have someone who specifically cooks carrots to perfection.

Having the right roles for the size and type of kitchen – or business – ensures that those who know the job, or the data, best are brought in at the right time, providing clarity, transparency and efficiency, and creating trust through data.

Think about it as avoiding chaos and catastrophe in the kitchen by adopting Data Governance guidance to optimise workflow, ensure quality and minimise missteps and miscommunication.

Just as in that kitchen, data assets without clear accountability and responsibilities will negatively impact the value chain to which we all contribute in one way or another. The proof is in the pudding.

Hyatt Saba


Guest Blog from Niels Lademark Heegaard - Is Your DG/EA/BPM Approach Mature and How Can You Tell?

It’s a pleasure to feature a guest post from Niels Lademark Heegaard, a talented friend and former colleague from my early days at Platon, the first consultancy I worked with. Niels has an exceptional ability to make complex ideas clear and practical - something I’ve admired since we first worked together.

In this blog, Niels explores how to define, measure, and apply maturity in Data Governance and Enterprise Architecture.

Yet another AI free blog post (except for correcting my spulling and the cover image).

This time around I muse about Data Governance and Enterprise Architecture maturity. How do you answer the age-old question: how mature are your Data Governance efforts?

Claimer: A search and replace and you can use this for EA, MDM, BPM, AI, etc.

Maturity…

Maturity is a term frequently thrown around in the context of both Data Governance and Enterprise Architecture (and a host of other disciplines). Statements such as “This company is immature” or “The Data Governance model is immature” are common, but little is said about how the label is defined.

What is maturity anyhow?

So, what is maturity; how do you measure it, and what can you use it for?

Definition: Maturity is the extent of your capabilities within an area.

You need a lot of capabilities to be successful in Data Governance, Enterprise Architecture, Master Data Management, Business Process Management, etc. Since the disciplines differ, the capabilities do as well; however, there is a surprising amount of overlap between fields.

Know your measures

Whenever I do a maturity assessment, I categorize these capabilities and subdivide them. One of the maturity indicators for all the mentioned fields is “Governance,” which I’ll use in my example below.

Governance

  1. Strategic recognition

  2. Funding

  3. Organization

  4. Assigned resources

Organization

  1. Placement in the organization

  2. Size and skills

  3. Team education and training

  4. Influence

Business alignment

  1. Business Drivers/Priority management

  2. Business Process alignment

  3. Ownership

  4. Co-development

Methodology

  1. Framework/Policies

  2. Methodologies/Guidelines

  3. Project alignment

  4. Budget process alignment

Technology support

  1. Requirements for IT Application Design

  2. Master Data Architecture

  3. Dedicated tooling

  4. Tool coverage/Tool adoption

Change Management

  1. IT project embedding

  2. Continuous discipline development

  3. Education (outside the team)

  4. Communication (outside the team)

Mind you, there are other potential subtopics (Governance could, e.g., include “Audit”). I usually end up with 6-10 different dimensions, but I’ll go with the four under “Governance.”

Six dimensions and a lot of questions

Just to avoid (or add to) the confusion: “Governance” in this context is about how your enterprise’s management approaches your particular discipline, e.g., BPM, EA, etc. It is not about the execution of Data Governance, but about the conditions that are set to perform Data Governance in the first place.

There is a point to this: Always explain what the overarching dimensions cover. It will also help you define the sub-topics.

Be consistent, get it right, and get value

I usually measure each sub-discipline individually on a score from one to five. The snag is: how do you decide, consistently, whether the score is a two or a three? And is it really that important to get it exactly right?

The answer to the latter question is resounding: Yes, it is very important!

I’ll explain why that is later, but I can reveal that getting it right will enhance the value of the maturity assessment immensely.

To assign the scores exactly right you need to define them. This is hard work, which is why I’ll limit myself to two dimensions, namely “Strategic recognition” and “Funding”:

Explaining step by step

Strategic recognition

Level 1 (a.k.a. What??) Data governance is not recognized as a value adding discipline in the company. The term is largely unknown, and the benefits are unrecognized (stop wasting my time).

Level 2 (a.k.a. Why?) Data governance is familiar to those who experience low data quality directly, but not as a trans-organizational effort. There is a departmental acknowledgment of the need, and a few understand the discipline and perceive the benefit of a collective effort. One or two projects may occasionally take Data Governance into account (why is this necessary??).

Level 3 (a.k.a. OK…) The term data governance is generally familiar and is recognized at the SVP level in some divisions. It is somewhat seen as an enabler of better execution within a division or department. There is local knowledge of what needs to be done, but also some resistance. Some business plans and most projects take Data Governance into account (too much bureaucracy… will do it if I have to).

Level 4 (a.k.a. Yes!) The term data governance is familiar and is recognized at the CxO level. It is seen as a prerequisite for better execution both within and across departments. There is an understanding of what Data Governance entails and acceptance of the actions that need to be taken. All business plans and projects take Data Governance into account or will need explicit permission not to (OK, will go do).

Level 5 (a.k.a. Of course!!) The term data governance is familiar and is recognized at the CxO and executive board level. It is seen as the foundation for business execution both within and across departments. Data governance is understood by everyone, and its consequential actions are seen as beneficial. The business strategy, business plans and projects take Data Governance into account or are paused until they do (Darn I forgot… it is top of my list!).

Let’s do funding

Level 1 (a.k.a. Go away (I need to manage these errors!)). Data governance is not funded. It is perceived mostly as an unnecessary distraction that lures some untoward employees away from value-adding work. Enthusiastic front runners and local advocates can hope that their manager turns a blind eye.

Level 2 (a.k.a. CAPEX not OPEX (We did it... are you still here?)). A data governance initiative, usually in the form of a project, is funded. The resulting recommendations might result in part-time allocation of some employees (but remember that you also have to deliver on [insert day job]). Funding is based on a yearly allocation in a local department. Project budgets will generally not have funds for taking Data Governance into account.

Level 3 (a.k.a. OPEX (OK, We’ll continue... for now)). A data governance department (or part of a larger department) is funded with a few (too few) people. The funding is given by an SVP or equivalent, subject to unchanged division budgets. The function is centralized and there might or might not be some funded liaison officers in other divisions (with day jobs). There is little in the way of dedicated technology support. Project budgets will sometimes have funds for taking Data Governance into account (especially migration projects).

Level 4 (a.k.a. OPEX (We got it fully covered)). A data governance department (that exists in its own right) is fully funded (usually slightly understaffed). The funding is given by a CxO but as a minor expense it is rarely listed in the annual report. There are funded liaison officers in most divisions. There might be some dedicated technological support (specialist tools and/or one-stop shops for all things DG). Project budgets have funds for taking Data Governance into account.

Level 5 (a.k.a. OPEX and CAPEX (No, we don’t... )). A usually slightly understaffed data governance department is fully funded (not a copy-paste error 😊). The funding is given by a CxO and is listed in the annual report as a strategic effort. There are funded liaison officers in all divisions. There is dedicated technological support (specialist tools and one-stop shops for all things DG). Project budgets have funds for taking Data Governance into account. There are ongoing project(s) to develop DG further.

Benefits

I stated that it is essential to define, and by extension measure, maturity precisely and consistently. With 24 questions, or twice that, it is a lot of work; however, there are three good reasons to do it.

The first reason: If you wish to report on the progress of your Data Governance efforts, you will want to score the same parameters next year. This should be consistent and use the same measuring stick. On a related note, I only give a score if ALL the parameters for that score are met. Your mileage may vary.
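The "score only if ALL parameters are met" rule lends itself to a tiny function. Here is a minimal sketch in Python, assuming level 1 is the baseline and the checks for levels 2-5 are recorded as yes/no answers; the names and structure are illustrative, not part of any formal method:

```python
def awarded_level(criteria_met):
    """Return the maturity score (1-5) for one sub-discipline.

    criteria_met maps each level (2-5) to the yes/no checks for it;
    level 1 is the baseline. A level is only awarded when ALL of its
    checks pass, and levels must be met cumulatively: a miss at
    level 3 caps the score at 2 even if level 4 looks satisfied.
    """
    score = 1
    for level in sorted(criteria_met):
        if all(criteria_met[level]):
            score = level
        else:
            break  # a miss at this level caps the score here
    return score

# All of level 2 met, one level-3 check missed: score stays at 2.
checks = {
    2: [True, True, True],
    3: [True, False],
    4: [True, True],
}
print(awarded_level(checks))  # -> 2
```

The cumulative break is the point: it enforces the same measuring stick this year and next, which is what makes year-on-year progress reports comparable.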

The second reason: This maturity assessment is an action plan in disguise. It tells you what to improve if you wish to take your capabilities up a notch. On another related note (and this is important), level five is NOT necessarily where you want to be. It depends on your organization’s needs. Those needs might not call for a level five capability.

The third reason: You can also set the goals using the same scoring (in the same sessions). Where do you want to be? Voilà, you just made half a gap analysis. List what it would take to close the gap, and you’ve made a full gap analysis.

Do it together and initiate change, do it alone and stay alone

A word of caution: do not attempt to measure the maturity without discussing the scores with relevant stakeholders. You could base the discussion on a pain-point analysis, workshops, or other forms of dialogue. Be prepared to explain the nuances. Once the hand-wringing is done, ask: Where do you want to be?

Show them what they have got

A final note: this is easy to communicate. Both the current and the target maturity can be shown in one easy-to-understand diagram. This is appealing to most people (or appalling, depending on the gap). Use a spiderweb diagram with the main topics as the spokes. Plot in the as-is and the... optimistic ambition.
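The spiderweb diagram can be produced with any charting tool; as one possible sketch, here it is in Python with matplotlib (assuming matplotlib is available, and with made-up dimension names and scores):

```python
import math
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

dims   = ["Governance", "Organization", "Business alignment",
          "Methodology", "Technology support", "Change Management"]
as_is  = [2, 3, 2, 1, 2, 1]
target = [3, 3, 4, 3, 3, 3]

# One spoke per dimension; repeat the first point to close each polygon.
angles = [2 * math.pi * i / len(dims) for i in range(len(dims))]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, as_is + as_is[:1], label="as-is")
ax.plot(angles, target + target[:1], label="target")
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims)
ax.set_ylim(0, 5)  # the 1-5 maturity scale
ax.legend()
fig.savefig("maturity_spiderweb.png")
```

The gap between the two polygons is the visual version of the action plan: the wider the ring between as-is and target on a spoke, the more work that dimension needs.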


Niels started his career as a master of agriculture, but soon realized his mistake and changed to the IT industry. Niels started working with data governance in 1997, before the term was coined. In the summer of 1997 he became master data manager, responsible for collecting and reporting the total research and science communication done at the University of Copenhagen, from papers to museum exhibitions, in one unambiguous format.

After a tenure at the Danish State Railways as information and enterprise architect, he joined a dedicated information management consultancy and later Deloitte by merger. The project tally as information management consultant ended at 28. Currently, he is working as the enterprise architect in a small company that calculates the electric grid capacity across Scandinavia.
