
Augmenting Human Expertise: Thinking Around AI, Machine Learning

Davide Zilli, March 13, 2020


A proponent of AI and machine learning, and of its place in wealth planning among other uses, argues that the human factor keeps development honest and humble. Experts must grasp how algorithms work and should be able to monitor and validate them easily if they are to fight bias and build trust.

AI and machine learning (ML) applications have been at the center of several recent high-profile controversies, among them biases revealed in how Apple applied credit limits to its card users, and similar biases exposed in Amazon's recruitment methods. To explore where the fear, uncertainty and doubt (FUD) factor lies in adopting wider use of AI, and where it can go wrong, Davide Zilli, client services director at Mind Foundry, an AI and ML developer and research firm spun out of the University of Oxford, explains why transparency and explainability will be vital in winning the fight against the biased algorithms that new design regulation is expected to address. The author is based in the UK but, given the global nature of the topic, we hope readers in North America find this of value.

Importantly, he explains why businesses in this field must set up dedicated education on machine learning, including modules on ethics and bias that explain how users can identify the dangers and, in turn, tackle or avoid them outright.

The editors are pleased to share the views of outside contributors, where the usual editorial disclaimers apply. If you would like to add your thoughts on this topic, email tom.burroughes@wealthbriefing.com and jackie.bennion@clearviewpublishing.com.

Today, in so many industries - from manufacturing and life sciences to financial services and retail - we rely on algorithms to conduct large-scale machine learning analysis. They are hugely effective for problem-solving and for augmenting human expertise within an organization. But they are now under the spotlight for many reasons - and regulation is on the horizon, with Gartner projecting that four of the G7 countries will establish dedicated associations to oversee AI and ML design by 2023. It remains vital that we understand their reasoning and decision-making process at every step.

Human experts need to understand the way in which algorithms work and should be able to monitor and validate them easily. "Black box" machine learning tools must be made explainable, removing the easy excuse of “the algorithm made me do it”.

The need to put bias in its place
Bias can be introduced into the machine learning process as early as the initial data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.
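
As an illustration, even a single check at the review stage can surface this kind of bias early. The sketch below, in Python with pandas, uses entirely hypothetical column names and figures; it simply compares outcome rates across a sensitive attribute before any model is trained:

# A minimal sketch, assuming a pandas DataFrame with hypothetical
# columns "gender" and "approved"; real data preparation involves
# many more parameters than this single check.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "income":   [52, 48, 61, 59, 40, 75, 55, 50],
    "approved": [0, 1, 1, 1, 0, 1, 0, 1],
})

# Compare approval rates across the sensitive attribute: a large gap
# here is an early warning of bias in the uploaded data itself.
rates = df.groupby("gender")["approved"].mean()
print(rates)
print("Disparity:", rates.max() - rates.min())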

Gender, for example, might be a useful parameter when looking to identify specific disease risks or health threats, but using gender in many other scenarios is completely unacceptable if it risks introducing bias and, in turn, discrimination. Machine learning models will inevitably exploit any parameters - such as gender - in the data sets they have access to, so it is vital for users to understand the steps a model takes to reach a specific conclusion.
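
This exploitation is easy to reproduce. The following hedged sketch, run on synthetic data with an illustrative "proxy" feature, shows why simply dropping the gender column is not enough when a correlated feature remains in the data set:

# A sketch of "proxy leakage": even after the gender column is dropped,
# a strongly correlated feature lets a model reconstruct it. The data
# and feature are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
gender = rng.integers(0, 2, size=1000)            # sensitive attribute
proxy = gender + rng.normal(0, 0.2, size=1000)    # correlated feature
X = proxy.reshape(-1, 1)                          # gender itself excluded

# If the proxy predicts gender well, any model trained on X can
# effectively exploit gender even though it was "removed".
clf = LogisticRegression().fit(X, gender)
print("Gender recoverable from proxy, accuracy:", clf.score(X, gender))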

Lifting the curtain on machine learning
Removing the complexity of the data science procedure will help users discover and address bias faster – and better understand the expected accuracy and outcomes of deploying a particular model.

Machine learning tools with built-in explainability allow users to demonstrate the reasoning behind applying ML to tackle a specific problem, and ultimately to justify the outcome. First steps towards this explainability would be features in the ML tool that enable the visual inspection of data – with the platform alerting users to potential bias during preparation – alongside metrics on model accuracy and health, including the ability to visualise what the model is doing.
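
A minimal sketch of what such accuracy and inspection metrics might look like, using scikit-learn's permutation importance on a synthetic data set (the data and features are illustrative, not any particular platform's API):

# Model health: held-out accuracy. Model behaviour: which inputs the
# model actually relies on, measured on unseen data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                    # three candidate features
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)  # only feature 0 matters

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("Test accuracy:", model.score(X_te, y_te))

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")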

Beyond this, ML platforms can take transparency further by introducing full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations – such as the European Union’s GDPR "right to explanation" clause – and helps demonstrate transparency to consumers effectively.
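
One possible shape for such an audit trail, sketched under stated assumptions (the file name, step names and fields are all hypothetical), is an append-only log that timestamps every data science step:

# Each pipeline step is logged as an append-only JSON line, so how and
# when data was imported, prepared or manipulated can be reconstructed.
import json
from datetime import datetime, timezone

AUDIT_LOG = "audit_trail.jsonl"

def record_step(action: str, details: dict) -> None:
    """Append one immutable audit entry describing a pipeline step."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "details": details,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_step("import", {"source": "loans.csv", "rows": 10000})
record_step("prepare", {"dropped_columns": ["gender"], "reason": "bias review"})
record_step("train", {"model": "logistic_regression", "accuracy": 0.87})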



