Are You Propagating Poor Quality Data?


Data quality has been an issue right from the very beginning of the IT industry. With the exponential spread of file-based and database systems over recent decades, System Administrators have grappled with the issues of data quality, duplication and consistency.  Inaccuracies in data can occur for a number of reasons, but the blame most often falls on manual data entry by employees.  Within the context of the PLM toolset, this is typically staff entering insufficient or incorrect attribute data when checking in CAD models, creating BOMs, raising Engineering Changes or generating supporting information for PLM objects.  Other causes of poor data quality can often be traced back to data migration or consolidation exercises.  Even with the best error-checking scripts, data migration projects often lead to fields not mapping correctly, especially where manufacturing companies have used multiple different part identification systems over the years.  Corporate mergers and acquisitions, along with the obsolescence of CAD and PLM software, have forced most manufacturing companies to undertake data migration exercises.
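As an illustration of the kind of error-checking a migration project might run, here is a minimal Python sketch. The scheme names and part-number patterns are made up for the example; real patterns would come from each source system's documentation. It classifies incoming part numbers against known legacy formats and flags anything that does not map, so those records can be reviewed before loading into the target PLM system.

```python
import re

# Hypothetical legacy part-numbering schemes (illustrative patterns only).
LEGACY_PATTERNS = {
    "plant_a": re.compile(r"^A-\d{6}$"),              # e.g. A-123456
    "plant_b": re.compile(r"^[A-Z]{2}\d{4}-\d{2}$"),  # e.g. XY1234-01
}

def classify_part_number(part_number: str) -> str:
    """Return the legacy scheme a part number matches, or 'unmapped'."""
    for scheme, pattern in LEGACY_PATTERNS.items():
        if pattern.match(part_number):
            return scheme
    return "unmapped"

def migration_report(part_numbers: list[str]) -> dict[str, list[str]]:
    """Group incoming part numbers by scheme so unmapped ones can be
    reviewed before they are loaded into the target system."""
    report: dict[str, list[str]] = {}
    for pn in part_numbers:
        report.setdefault(classify_part_number(pn), []).append(pn)
    return report

if __name__ == "__main__":
    sample = ["A-123456", "XY1234-01", "OLD/99/B"]
    print(migration_report(sample))
    # {'plant_a': ['A-123456'], 'plant_b': ['XY1234-01'], 'unmapped': ['OLD/99/B']}
```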

The manufacturing industry is undergoing a revolution, driven primarily by the opportunities created by the connectedness of the Internet. The ability to collect and analyse more data efficiently has allowed leaders to better understand the requirements of their customers.  You will have heard the claims that “Data is the new Oil” or that “Data has surpassed Oil as the most valuable resource”; without doubt, data has become the new global commodity. However, for data to provide more value than oil, it needs to feed emerging technologies such as Digital Twin, IoT and Artificial Intelligence with accurate inputs.  Poor quality data often leads to incorrect interpretation or analysis, which in turn results in bad decision-making that undermines operational effectiveness.  Inconsistent data creates inefficiencies across the enterprise, resulting in wasted time, lost productivity and reduced customer satisfaction.

To combat poor data quality, leading organisations are investing in tools and techniques to define the taxonomy, or classification, of their data assets. Unfortunately, even established taxonomy standards such as UNSPSC or DIN 4000-1 do not provide the full attribute structure, so metadata fields and attributes are added on a per-organisation basis.  With a classification structure and attribute fields in place, users are able to make more effective use of the inbuilt search and discovery tools provided by the PLM vendors.  Taxonomy is a huge source of frustration, as organisations go through a period of trial and error before finding a data model that supports the business needs at an appropriate level. Many consultancies offer bespoke services to evaluate, clean or migrate data to ensure consistent quality.  There are also some innovative tools available to support identification and analysis, such as tools that index the shape of 3D CAD files to identify similar or duplicate parts.
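To make the idea of a classification structure with attribute fields more concrete, the sketch below (hypothetical class names and attribute sets, not any vendor's data model) shows how required attributes can hang off a classification node, enabling both completeness checks and attribute-based search.

```python
from dataclasses import dataclass, field

# Hypothetical classification nodes and their required attributes; in practice
# these would mirror the organisation's own taxonomy.
CLASS_ATTRIBUTES = {
    "Fastener/Screw": {"thread_size", "length_mm", "material", "head_type"},
    "Electronics/Resistor": {"resistance_ohm", "tolerance_pct", "package"},
}

@dataclass
class PartRecord:
    part_number: str
    classification: str
    attributes: dict = field(default_factory=dict)

def missing_attributes(part: PartRecord) -> set[str]:
    """Attributes required by the part's classification but not populated."""
    required = CLASS_ATTRIBUTES.get(part.classification, set())
    populated = {k for k, v in part.attributes.items() if v not in (None, "")}
    return required - populated

def search(parts: list[PartRecord], classification: str, **criteria) -> list[PartRecord]:
    """Faceted search: filter by classification, then by attribute values."""
    hits = [p for p in parts if p.classification == classification]
    return [p for p in hits
            if all(p.attributes.get(k) == v for k, v in criteria.items())]

screw = PartRecord("A-123456", "Fastener/Screw",
                   {"thread_size": "M6", "length_mm": 30, "material": "steel"})
print(missing_attributes(screw))            # {'head_type'}
print(search([screw], "Fastener/Screw", thread_size="M6"))
```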

These PLM data models, classification structures and attribute fields were born out of the need to standardise enterprise manufacturing information and provide workflows for repeatable, standard processes. Let’s go back to the most commonly identified area of poor data quality: user data entry.  A user’s entry will typically be correct if they have been educated on all the options for the attribute field, why the field is valuable to the organisation, and the consequences to the company if the data quality is poor.  Unfortunately, it is all too common for end-users to complete only the mandatory fields, and even then not be engaged in ensuring those fields are correct.  Often corporate data guidelines either do not exist or are not kept up to date, leading users to dismiss the guidance as irrelevant.

Unfortunately, I have seen too many organisations where end-users have become resentful of the number of mandatory fields required, resulting in behaviours bordering on sabotage, where incorrect inputs are purposely selected to “make a point”.  They are often frustrated by the inability of the PLM administrators to keep the PLM data model and attribute values relevant as projects, programmes and personnel change.  Now more than ever, PLM leaders need to keep the data and attribute models focused and appropriate, balancing usability against information requirements. Try a lighter approach to mandatory attributes in the creation and ideation phase, then strategically introduce more as the designs or documents gain maturity. Leaders should also provide ongoing education and engagement with users on why the attribute information is important to all areas of the business, and therefore to the continued success of the organisation.  As a PLM manager, are you responsible for propagating poor data quality through the system by not being responsive to your customers’ (the end-users’) needs?
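One way to implement that lighter-touch approach is to tie the set of mandatory attributes to the lifecycle state, so that a promotion is blocked only when the fields appropriate to that level of maturity are missing. The sketch below uses hypothetical state names and field names purely for illustration; no specific PLM tool's schema is implied.

```python
# Hypothetical lifecycle states and the attributes that become mandatory at
# each stage; field names are illustrative only.
MANDATORY_BY_STATE = {
    "ideation":  {"title", "owner"},
    "in_work":   {"title", "owner", "classification", "project_code"},
    "in_review": {"title", "owner", "classification", "project_code",
                  "material", "mass_kg"},
    "released":  {"title", "owner", "classification", "project_code",
                  "material", "mass_kg", "export_control", "unit_cost"},
}

def missing_for_state(attributes: dict, target_state: str) -> set[str]:
    """Attributes that must be filled in before promotion to target_state."""
    required = MANDATORY_BY_STATE[target_state]
    populated = {k for k, v in attributes.items() if v not in (None, "")}
    return required - populated

draft = {"title": "Bracket, left-hand", "owner": "jsmith",
         "classification": "Fastener/Bracket", "project_code": "P-0042"}
print(missing_for_state(draft, "in_work"))    # set() -> promotion allowed
print(missing_for_state(draft, "in_review"))  # {'material', 'mass_kg'}
```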

At Product Innovation (PI), a CIO-led learning community for manufacturers, we have spent several years capturing, through interviews and recorded presentations, how leading organisations have understood the importance of data analytics and actionable intelligence in driving business adoption. This content is available through our PI membership community, with curated material ensuring that both new and existing manufacturers tackling this challenge gain insights into effective strategy, implementation and the inevitable pitfalls along the way.

Paul Empringham, VP Research, PI – a CIO-led learning community for manufacturers.

