Fostering Trust on ML Inferences

<p>The Machine Learning teams at Workday have a tremendous responsibility to develop reliable AI and ML. Building ever more trustworthy ML inferences is a path both to increase the value of our products (i.e., increased trust in the results) and to engage in conversations with customers. In this article we examine the dynamic of trust between a service provider (Workday, the trustor) and service users (customers, the trustees). Trustors are required to be <em>trusting</em> and <em>trustworthy</em>, whereas trustees need be neither <em>trusting</em> nor <em>trustworthy</em>. The challenge for trustors is to provide services that are good enough to raise a trustee&rsquo;s level of trust above the minimum threshold for (1) doing business together and (2) continuing the service.</p>

<h1>Introduction</h1>

<p>The paradigm explored in this article assumes that trust is built by an initial altruistic act by the trustor, signaling that the actor is trustworthy. More specifically, Workday&rsquo;s altruistic act is to invest in building a product and offer it to customers with the promise that it will generate value for them; more value than what is paid in return for the service. The trustor decides how much to invest, and the trustee decides whether to reciprocate and give continuity to the business relationship.</p>

<p>The objective is to make them [customers] trusting &mdash; above a minimum threshold <em>T</em> &mdash; so that they engage in the Trust Game [1], an extension built on top of Game Theory [2]. Furthermore, trust has a temporal element: once established, there is no guarantee of continuation. The interaction is therefore modeled as an extensive-form game in which both actors collaborate, observe each other, and react to one another&rsquo;s historical actions. A minimal simulation of this repeated interaction is sketched below.</p>
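<p>To make the mechanics concrete, the snippet below is a minimal sketch of such a repeated Trust Game in Python. The multiplier, the trustee&rsquo;s return fractions, and the trust-update rule are illustrative assumptions rather than values from the article; the point is only to show the trustor&rsquo;s altruistic investment, the trustee&rsquo;s choice to reciprocate, and the threshold <em>T</em> below which the relationship ends.</p>

<pre><code>
import random

MULTIPLIER = 3          # invested amount grows before reaching the trustee
TRUST_THRESHOLD = 0.4   # illustrative stand-in for the minimum threshold T

def play_repeated_trust_game(rounds=10, endowment=10.0):
    """Simulate an extensive-form (repeated) Trust Game.

    Each round the trustor invests in proportion to its current trust;
    the trustee reciprocates by returning a fraction of the multiplied
    investment. Trust is updated from the observed history, and the
    relationship ends once trust drops under the threshold.
    """
    trust = 0.5   # the trustor's initial, altruistic level of trust
    history = []
    while trust >= TRUST_THRESHOLD and len(history) != rounds:
        investment = endowment * trust           # trustor's altruistic act
        received = investment * MULTIPLIER       # value created by the service
        returned = received * random.uniform(0.2, 0.6)  # trustee's (noisy) reciprocity
        # The trustor reacts to history: trust rises when the returned
        # amount exceeds the original investment, and falls otherwise.
        trust = trust + 0.1 if returned > investment else trust - 0.15
        trust = min(max(trust, 0.0), 1.0)
        history.append((len(history) + 1, investment, returned, trust))
        print(f"Round {len(history)}: invested {investment:.2f}, "
              f"returned {returned:.2f}, trust now {trust:.2f}")
    return history

if __name__ == "__main__":
    random.seed(42)  # reproducible illustrative run
    play_repeated_trust_game()
</code></pre>

<p>Running the sketch shows trust drifting upward while the trustee reciprocates generously, and the game terminating as soon as trust falls under the threshold &mdash; the two outcomes the article frames as continuation or discontinuation of the business relationship.</p>

<p><a href="https://medium.com/workday-engineering/fostering-trust-on-ml-inferences-afadddadf5da">Read More</a></p>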
Tags: Fostering ML