Jim Barker is a professional in the IT space who has had varied roles: he started out as a mainframe programmer, led data initiatives at Honeywell, and moved to a product management leadership position at Winshuttle, with a focus on data improvement, in early 2015. Jim wore out many pairs of shoes as a front-end screen developer before PCs and the Web, grew up with ETL, BI, and statistical tools, led many SAP deployment projects, and introduced and evangelized data governance and 'data as an asset' at Honeywell. Key highlights of his career include architecting the data warehouse solution at Thomson Legal and Regulatory, including a very early start in Big Data in 2001 and Data Governance in 2003; developing the Velocity Data Migration Methodology at Informatica in 2005; and leading data for SAP site deployments with Honeywell at over 600 sites in 45 countries.
How long have you been working in Data Governance?
I have been working in and around data governance and data quality since 1994 as part of data warehousing, data integration, and data migration initiatives.
How did you start working in Data Governance?
I was first introduced to data quality at a health insurance firm in 1994 when I used Code1 (Method 1) on the mainframe to correct subscriber and clinic data. This included building improved interfaces with CICS to prevent bad data from being set up in claims systems.
What were your initial thoughts when you first fully understood what you had got into?
I was younger then, and my first thoughts were: why is this so difficult? Why can't folks just set up data right in the first place? That has changed over time, but those were my first thoughts. When I first had leadership responsibilities it became more practical: how can we come up with solutions, techniques, and tools to help correct problem data and also stop it in the future?
Are there any particular resources that you found useful when you were starting out?
The first real governance solutions I worked with were at Thomson, where we leveraged a lot of Business Objects capability along with FirstLogic data quality tools. But since no one was really writing about data quality or data governance at that point, the foundational project management techniques provided by the Project Management Institute (PMI) were what our team used to build out our governance capabilities.
Later, I found some of the information being published by former data warehousing gurus to be helpful, people like Larry English.
As well as working in Data Governance, you've also been pursuing a PhD in the discipline. How have you found that?
It is interesting to see how much academics dislike the communication style of consulting and IT leaders. I found that I had to spend much more time on any piece of coursework to meet their style expectations.
The most valuable and enjoyable part of the activity was learning different ways of collecting information through varied qualitative study methods, and using those to talk with other professionals in the realm of data governance.
What do you hope to do with the results of your PhD dissertation?
Once I complete my defense I would like to use my findings in a couple of ways: publishing some summary articles, and expanding them into a book on lean data management.
What is the biggest Data Governance challenge you have faced so far?
I think the biggest challenge is how to reconcile the divergent demands of new product introduction timescales and the need to have data set up correctly. Folks working in business functions (not data stewards) can get very focused on cycle time reduction and sometimes lose the ability to see the forest for the trees; they are so worried about getting a product set up that they don't want to take the time to get the data right, and they are willing to pay the price in the long run. This is common across CPG, manufacturing, finance, information, and defense organizations, with finance firms most interested in getting it right in the first place.
What have you implemented or solved so far that you are particularly proud of?
Two things come to mind. First, we built a global team at Honeywell that developed a set of data quality scorecards that helped expedite the data migration process, so that with a relatively small team we were doing 10 or more SAP deployments at once, keeping the focus on time but also on getting the data right. For this work we received an innovation award from Informatica for efficiency in M&A.
Second, there is the data migration methodology I built out at Informatica, which many software firms are now using as the standard for what I would call agile+ (the plus being data governance).
What single piece of advice would you give someone just starting out in Data Governance?
Keep your eyes open. Don't go blindly into a tools-only solution. Every time you read something or hear something from an expert, figure out what it means in your organization, and customize a solution that draws on perspectives from many different sources of information; in other words, avoid groupthink.
Finally, what do you wish you had known or done differently when you were just starting out in Data Governance?
I think I would have put more emphasis on process mapping and focused more on policies and procedures. It is much easier to deal with tools; it is more difficult to understand the process and correct it, with the aid of tools, to make a difference.
I also wish I had really understood that you can't do it alone; you need a collection of folks working in concert. People are a huge part of data governance efforts, and you need to find ways to get folks in all parts of the organization to embrace the importance of having the data right and go down the data governance path with you.
Having read my interview with Jim, you can also read my free report, which reveals why companies struggle to successfully implement data governance.