Ethical AI is Rooted in Incentive Alignment

Have you ever hesitated before running a Google search, weighing the pros of getting a quick answer against being chased by ads on the topic for the next month? Have you questioned Yelp search results, wondering why a restaurant where you had a bad experience is showing up at the top? 

We all have. These issues stem from a rarely discussed, but crucial aspect of AI: incentive alignment. Simply put, the beneficiary of AI is usually the entity whose incentives align most closely with the company behind the AI. When the customer's and company's incentives mostly match, AI can boost both customer experience and profitability. However, if a third party is the source of the company’s revenue, AI may exacerbate this misalignment, harming one party while benefiting the other.

Consider Yelp's search results, an AI-powered feature at the core of the business. How does Yelp make money? Not from you, but from restaurants seeking demand. Therefore, as Yelp's AI grows more sophisticated, it will do a better job of bringing additional customers to the highest bidders. The end impact on the consumer is often a mediocre restaurant showing up right at the top. We've all seen this happen firsthand; it's not an accident. The beneficiary of AI improvement isn't the consumer; it's the third party. 
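To make the dynamic concrete, here's a deliberately simplified sketch (not Yelp's actual algorithm; the scoring function, weights, and restaurant data are all hypothetical) of a ranker whose score blends relevance to the searcher with the restaurant's ad bid. The more the third party's bid counts, the less the searcher's relevance does:

```python
# Hypothetical ranker: score = blend of user relevance and advertiser bid.
# As bid_weight grows, the highest bidder outranks the best match.

def rank(restaurants, bid_weight):
    """Sort restaurants by a weighted blend of relevance and ad bid."""
    return sorted(
        restaurants,
        key=lambda r: (1 - bid_weight) * r["relevance"] + bid_weight * r["bid"],
        reverse=True,
    )

restaurants = [
    {"name": "Great Local Spot", "relevance": 0.9, "bid": 0.1},
    {"name": "Mediocre Big Spender", "relevance": 0.4, "bid": 0.9},
]

# With no ad influence, the best match for the user wins...
print(rank(restaurants, bid_weight=0.0)[0]["name"])  # Great Local Spot
# ...but once bids dominate the score, the highest bidder does.
print(rank(restaurants, bid_weight=0.8)[0]["name"])  # Mediocre Big Spender
```

The point of the toy model: nothing about the AI itself changed between the two calls, only whose incentive the objective function serves.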


Contrast Yelp with Apple's Siri. Most of Apple's sales come from hardware. When you ask Siri to locate a restaurant, Apple's primary aim is to ensure you love your iPhone enough to pay a thousand bucks for the next version. As Apple's AI advances, it'll directly benefit consumers—assuming Apple's fledgling ad business doesn't overshadow it. 

Short-term versus Long-Term Incentive Alignment

Facebook might seem like an obvious example of the same misalignment: they make money from ads, so surely the same conflict with the consumer is built into the core business, right? Aren't they more like Yelp than Apple? Not necessarily.

Facebook's triumph in the social media wars can be attributed to its strategic focus on user engagement rather than immediate profitability (that is, rather than advertising efficiency). This focus meant short-term pain for advertisers but cemented user loyalty and catapulted Facebook into a social media titan. Engagement - likes, shares, comments, time on site - all added up to long-term consumer retention. Or, dare I say, consumer addiction. 

As we now know, polarizing, emotionally charged content is most engaging: it delivers a dopamine hit. And yet, like with sugary drinks and nicotine, giving the user a hit isn’t good for them in the long run. Eventually, they realize most of the time they spend on Facebook is best described by a WebMD article titled “doom scrolling.” They understand that Facebook puts them into an “information bubble” that feels cozy but makes them actively hate their neighbor for their slightly different beliefs.

That is, while Facebook is perfectly aligned with its users' short-term interests, this focus on engagement runs counter to their long-term interests. This isn't the fault of the brilliant AI operating Facebook's newsfeed ranker. It stems from a deep-seated incentive misalignment between the social network and the long-term well-being of its users.


Business Models With Aligned Incentives

Is Apple alone in its business model? Are most AI applications out there subject to this dreadful misalignment of incentives, either in the short or long run? Absolutely not; let’s look at some examples. 

Spotify's Discover Weekly and personalized playlists are powered by AI. They are famous for their recommendation system, which uses customers' listening habits to suggest new music. This is a win-win situation. Users discover new music they love, increasing their engagement with the platform, and Spotify benefits from increased user activity and retention.

Zoom has an AI-powered transcription service that provides real-time subtitles during video calls, making meetings more accessible for users with hearing impairments or limited language fluency. It's easy to see the next steps for this AI offering: automatically summarizing meetings and giving participants notes and pointers on how to be more effective. Both Zoom and its users benefit, through increased satisfaction and engagement.

Duolingo's AI-powered language learning tools adapt to the user's learning style. The app adjusts the difficulty level and type of exercises based on the user's performance. This personalized learning experience enhances the user's ability to learn a new language, directly aligning with Duolingo's profitability and leading to higher user retention.
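The adaptive part of that loop can be sketched in a few lines. This is purely illustrative (not Duolingo's actual logic; the thresholds, step size, and level scale are invented): a rule that nudges exercise difficulty up when the learner is succeeding and down when they struggle.

```python
# Illustrative adaptive-difficulty rule: raise the level when recent
# accuracy is high, lower it when accuracy is low, and clamp to a range.

def next_difficulty(level, recent_accuracy, step=1, lo=1, hi=10):
    """Adjust the exercise difficulty based on the learner's recent accuracy."""
    if recent_accuracy >= 0.85:      # cruising: make it harder
        level += step
    elif recent_accuracy <= 0.55:    # struggling: ease off
        level -= step
    return max(lo, min(hi, level))   # stay within the allowed range

print(next_difficulty(5, 0.92))  # 6
print(next_difficulty(5, 0.40))  # 4
print(next_difficulty(5, 0.70))  # 5
```

Notice that the objective being served here is the learner's progress: the same signal that keeps the user improving is the one that keeps them subscribed.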

Are there pitfalls in each of these businesses? Of course. For example, music producers negotiate with Spotify to get additional exposure for their hits, beyond what the algorithm would normally provide. Zoom recently crossed into “the land of creepy AI” with privacy invasiveness. These companies aren’t immune from incentive misalignment on the margins; however, the heft of their core business - and its alignment with users’ interests - inoculates them from going too far toward the dark side.

How We Think About Aligning Incentives At Hungryroot

At Hungryroot, the company where I work, we've built our business around personalization: AI directly drives two-thirds of our sales. We use AI to pre-fill customer carts with groceries tailored to their dietary preferences and long-term health goals. The key metric our AI optimizes for is retention, ensuring that our interests align with those of our customers. 

When customers know that the AI is working in their favor, they feel more comfortable sharing personal information, creating a mutually beneficial relationship. Our AI could not exist without 100+ explicit, zero-party data points that our customers specifically choose to share with us. These range from preferred cuisines and kids' ages to the kitchen appliances at their disposal and their favorite proteins. 

Our data collection doesn't end at signup. Each time we deliver a box, we ask customers for feedback. While some might see this as introducing friction into the user experience, we see it as a key enabler of doing a better job for our customers. It's a trust-based exchange: the customer understands that by providing us with this information, they're helping us improve their experience next time. Their feedback directly influences the output of our AI system, making the service more tailored to their preferences and needs.
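A toy sketch of the idea (this is not Hungryroot's production system; the item catalog, preference fields, scoring weights, and feedback scale are all hypothetical): pre-fill a cart by scoring groceries against stated, zero-party preferences, then nudge the scores with post-delivery feedback.

```python
# Hypothetical cart pre-fill: score items against stated preferences,
# then adjust with post-delivery feedback (+1 thumbs-up, -1 thumbs-down).

def score(item, prefs, feedback):
    """Higher when the item matches stated preferences and past thumbs-up."""
    s = 0.0
    if item["cuisine"] in prefs["cuisines"]:
        s += 1.0
    if item["protein"] in prefs["proteins"]:
        s += 1.0
    s += 1.5 * feedback.get(item["name"], 0)  # feedback outweighs a single match
    return s

def prefill_cart(catalog, prefs, feedback, size=2):
    """Return the names of the top-scoring items to pre-fill the cart."""
    ranked = sorted(catalog, key=lambda i: score(i, prefs, feedback), reverse=True)
    return [i["name"] for i in ranked[:size]]

catalog = [
    {"name": "Chicken Tacos", "cuisine": "mexican", "protein": "chicken"},
    {"name": "Tofu Stir-Fry", "cuisine": "asian", "protein": "tofu"},
    {"name": "Beef Lasagna", "cuisine": "italian", "protein": "beef"},
]
prefs = {"cuisines": {"mexican", "asian"}, "proteins": {"tofu"}}
feedback = {"Chicken Tacos": -1}  # customer disliked last delivery

print(prefill_cart(catalog, prefs, feedback))  # ['Tofu Stir-Fry', 'Beef Lasagna']
```

The design choice worth noticing: the feedback term only helps the customer if they trust us enough to give it, which is exactly the trust-based exchange described above.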

If the customer were unsure about how we would put this data to use, would they ever give it to us? Would they tell us that they're trying to lose weight, or that their children are picky eaters? Of course not. They wouldn't give this kind of information to a standard grocer; the trust and relationship expectations are different. The "agent" relationship we have with our customers - one we work to earn every day - is what gives our AI its core competitive advantage: more and better data. 

That agent relationship, in turn, is all about the consumer trusting that their incentives are aligned with ours.

Ethical AI is About Incentive Alignment

In conversations about AI ethics, it's crucial to first understand who the AI is optimizing for. Once that's established, it becomes easier to answer the questions typically at the center of AI ethics: privacy, transparency, and bias. 

Incentive alignment doesn’t eliminate the agency problem, but it does reduce it. A real estate agent that’s working on a flat fee is less likely to push you to buy a more expensive house than the one that’s working on commission. An insurance company that’s trying to reduce their payouts will push you to get yearly check-ups. And an AI acting on behalf of a company that wants you to keep coming back will optimize for things that are good for you - not a third party.


Photo Credits: thanks to Comedy Wildlife Photo and Demotivators for their amazing photos and humor.


