XAI key for data understanding

Oct 1, 2020

Andreas Bartsch, Head of Service Delivery at PBT Group

According to an Accenture report, responsible artificial intelligence (AI) provides a framework for building trust in an organisation's AI solutions. It is defined as the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society. In turn, this allows companies to engender trust and scale AI with confidence.

With the technology becoming commonplace, more organisations around the world are seeing the need to adopt responsible AI. For example, Microsoft relies on an AI, Ethics, and Effects in Engineering and Research (Aether) Committee to advise its leadership on the challenges and opportunities presented by AI innovations. Some of the elements the committee examines are how fairly AI systems treat people, the reliability and safety of AI systems, how AI systems empower employees to engage with one another, and how understandable AI systems are.

We examined the understandability, or explainability, of AI last month, and you can revisit that discussion here. Suffice it to say, it points to how responsible AI is built around human-centred design. It considers the individual, the governance of AI, training, and data monitoring, amongst others. So, regardless of the type of AI used, it must be understood and provide unbiased insights. Overall, AI must be deployed responsibly and complement the human factor. Contrary to popular opinion, AI is not out to replace people but should rather enable them to deliver more strategic value to the organisation.

Providing structure

The governance framework for a responsible AI environment must be developed in collaboration with colleagues to avoid people feeling intimidated by the technology. If they understand how the AI processes information and how the framework is structured, they will be more open to using the technology in their work environment.

Of course, it is critical that AI models are based on accurate, complete, and timely data. An organisation must therefore get its ‘data house’ in order before adopting any AI technology. This will help ensure the modelling and subsequent processes are based on accurate information. Furthermore, as data migration projects move data between cloud environments or from on-premises systems to hosted solutions, data quality, governance, and engineering must be kept in mind to deliver business value.
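As a loose illustration of what ‘accurate, complete, and timely’ can mean in practice, the sketch below runs a few basic data-quality checks before a dataset is handed to a model. The column names, thresholds, and the pandas-based approach are assumptions for illustration only, not something prescribed in this article.

```python
# A minimal, illustrative data-quality gate (assumed column names and thresholds).
import pandas as pd

def basic_quality_checks(df: pd.DataFrame, timestamp_col: str, max_age_days: int = 7) -> dict:
    """Return simple completeness, validity, and timeliness indicators for a dataset."""
    results = {}

    # Completeness: share of missing values per column.
    results["missing_ratio"] = df.isna().mean().to_dict()

    # Validity (a rough accuracy proxy): duplicate records often signal upstream issues.
    results["duplicate_rows"] = int(df.duplicated().sum())

    # Timeliness: how stale is the most recent record?
    latest = pd.to_datetime(df[timestamp_col]).max()
    age_days = (pd.Timestamp.now() - latest).days
    results["data_age_days"] = age_days
    results["is_fresh"] = age_days <= max_age_days

    return results

# Example usage with a hypothetical customer extract:
# report = basic_quality_checks(pd.read_csv("customers.csv"), timestamp_col="last_updated")
# print(report)
```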

Fundamentally, sound data principles must be in place to support AI in whichever form it takes. Data quality and data engineering are therefore foundational to positioning AI and other technologies to optimise and train models correctly. And throughout, this must be done with the appropriate sensitivity to the human aspects.

Responsible AI, while rooted in technology, is more about understanding how to establish AI-driven processes that are mindful of human resources and the quality of the data used.
