"Nutrition labels" aim to boost trust in AI

Source: 
Author: 
Coverage Type: 

As adoption of generative AI grows, providers are betting that greater transparency about how they do and don't use customers' data will increase clients' trust in the technology. There's a mad scramble to add AI features across the software world, but worries about privacy and security are prompting some businesses to discourage employees from using the new features.

Twilio, which helps businesses automate communications with their customers, announced it will place "nutrition labels" on the AI services it offers, clearly outlining how customer data will be used. The labels report which AI models Twilio is using, whether those models are trained on customer data, whether features are optional, and whether there is a "human in the loop." A "privacy ladder" distinguishes company data used only for a customer's internal projects from data also used to train models serving other customers, and notes whether personally identifiable information is included.

Beyond labeling its own data collection, Twilio is providing an online tool that other companies can use to generate similar AI nutrition labels for their own products. Separately, Salesforce is unveiling an acceptable use policy governing what companies can and can't do with its generative AI technologies. While there is still an air of excitement around generative AI's potential to improve productivity, many companies have taken a cautious approach, warning employees not to put company data into tools like ChatGPT. Transparency, both Salesforce and Twilio say, is key to increasing trust.
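As a purely illustrative sketch, the disclosures described above could be modeled as structured data. The field names, values, and summary function here are hypothetical assumptions for illustration, not Twilio's actual label schema:

```python
# Hypothetical sketch of an "AI nutrition label" as structured data.
# All field names and values are illustrative assumptions, not Twilio's schema.
ai_nutrition_label = {
    "models_used": ["example-llm-v1"],   # which AI models power the feature (placeholder name)
    "trained_on_customer_data": False,   # whether customer data is used to train models
    "feature_optional": True,            # whether the AI feature can be switched off
    "human_in_the_loop": True,           # whether a human reviews outputs
    "privacy_ladder": {
        "internal_projects_only": True,  # data used only for the customer's own projects
        "trains_shared_models": False,   # data also trains models used by other customers
        "contains_pii": False,           # personally identifiable information included?
    },
}

def label_summary(label: dict) -> str:
    """Render a one-line summary of the label's key disclosures."""
    return (
        f"trains on customer data: {label['trained_on_customer_data']}, "
        f"human in the loop: {label['human_in_the_loop']}, "
        f"PII included: {label['privacy_ladder']['contains_pii']}"
    )

print(label_summary(ai_nutrition_label))
```

A machine-readable shape like this is one way a generator tool could emit comparable labels across different vendors' products.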
