Exploring Trust in SaaS AI Features
January 2024
Background
As generative AI features became more common in B2B SaaS applications, our Segment observability team wanted to understand what it means for customers to trust AI. Most importantly, the team wanted to know which observability and security features we should build so that customers would trust our generative AI features.
Methods
To gather in-depth information on customers’ experiences, I conducted 17 interviews spanning multiple AI user types (buyers, implementers, and end users) and companies of varying sizes.
During the interviews, we discussed participants’ current uses and perceptions of AI, and gathered feedback on Twilio’s AI Nutrition Facts label.
To make the discussion of AI trust more concrete, I developed a list of AI use cases and had participants rank them from most to least risky. We also evaluated a specific generative AI feature that was in development at the time.
Outcomes
This research identified AI trust requirements that Segment was already meeting, as well as areas where Segment could innovate to become a market leader. One such feature was goal tracking, which we were already on track to implement in our Predictions product.
The findings from this research were also used to create a “Designing for AI” Handbook to help product managers and designers design and evaluate AI features ethically.
Want more? Get in touch to see my full résumé and portfolio.