
Why AI Fairness Is Important in Telecom Personalization

Last Updated on October 22, 2022 by Editorial Team

Author(s): Arslan Shahid


Personalization is the name of the game in the telecom industry

Image from Pexels

For the past two years, I have been working in the personalization and contextual marketing department of one of the major mobile operators in Pakistan. For those who don't know, personalization means designing products, recommendations, and ads for individuals based on their attributes and preferences.

Telecommunications is an oligopoly market with little to no product differentiation. Mobile data, minutes, and SMS can be thought of as commodities. One way to add value for customers and businesses is to construct personalized products for groups or individuals. Personalization is achieved using tools such as machine learning and statistical analysis. However, as I will explain in this post, if left unchecked, personalization can have adverse effects on individuals and society at large.

What do you mean by 'AI Fairness'?

Contrary to what some people believe, AI and statistical models are not free from bias or from making discriminatory predictions or recommendations. Every model is a statistical approximation of the data, built on a set of variables that we, as modelers, believe are predictive of the outcome of interest. By choosing the attributes they think are predictive and measuring the data they think is appropriate, modelers make choices that reflect their own beliefs. Furthermore, the data itself can reflect historic privilege or discrimination against a group of people.

AI fairness is the study of how algorithms treat groups of people, and of how to make them less predatory or discriminatory toward protected groups defined by attributes such as gender, ethnicity, country of origin, or illness. It rests on the premise that membership in such a group does not, or should not, give an algorithm a basis for a prediction or decision that results in an adverse outcome. For example, a credit scoring algorithm that assigns a lower credit score to a black person than to a white person with 'similar' attributes is 'unfair'. The quotation marks signify that this depends on how 'similar' and 'fair' are defined.

How is group fairness defined mathematically?

Although there are plenty of ways to define fairness as a mathematical construct, below are a few of the most common, taken from CS 294: Fairness in Machine Learning, UC Berkeley, Fall 2017.

Fairness in Classification:

Demographic Parity:

$\Pr[C = 1 \mid A = 0] = \Pr[C = 1 \mid A = 1]$ (formula from fairmlclass)

Simply explained, demographic parity means that the probability that a classification algorithm predicts the positive class (C = 1) is the same whether an individual is from group A = 0 or group A = 1, where A can represent any protected attribute, such as gender.

Accuracy Parity:

$\Pr[C = Y \mid A = 0] = \Pr[C = Y \mid A = 1]$ (formula from fairmlclass)

Intuitively, a classifier satisfies accuracy parity when the probability that its prediction matches the true class (represented by Y) is the same whether a person has attribute A = 0 or A = 1. For example, a university admissions algorithm that accepts, rejects, or waitlists applicants would have to make the correct call at the same rate for male and female applicants.

Precision Parity:

$\Pr[Y = 1 \mid C = 1, A = 0] = \Pr[Y = 1 \mid C = 1, A = 1]$ (formula from fairmlclass)

A classifier is precision-parity fair when the probability that an individual is truly from the positive class, given that the classifier predicted they are, is the same for individuals with attribute A = 0 and A = 1. Take the example of an algorithm that predicts a rare disease, where doctors are only allowed to give medicine to patients predicted to have it. If we want the algorithm to be precision-parity fair, the proportion of predicted patients who actually have the disease should be the same across all protected groups.

There are plenty of novel, use-case-specific definitions of fairness for classification problems; the definitions above are among the most well-known and actively used.
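
To make these criteria concrete, here is a minimal sketch (in Python with NumPy; the function and array names are illustrative, not from the course material) that measures how far a classifier's predictions are from each of the three parities on a sample:

```python
import numpy as np

def parity_gaps(y_true, y_pred, a):
    """Gap in each fairness criterion between groups A=0 and A=1.

    y_true, y_pred: binary arrays of true labels (Y) and predictions (C).
    a: binary array with the protected attribute (A).
    A gap of 0 means the criterion holds exactly on this sample.
    """
    g0, g1 = (a == 0), (a == 1)

    # Demographic parity: P(C=1 | A=0) vs. P(C=1 | A=1)
    demographic = y_pred[g0].mean() - y_pred[g1].mean()

    # Accuracy parity: P(C=Y | A=0) vs. P(C=Y | A=1)
    accuracy = (y_pred[g0] == y_true[g0]).mean() - (y_pred[g1] == y_true[g1]).mean()

    # Precision parity: P(Y=1 | C=1, A=0) vs. P(Y=1 | C=1, A=1)
    precision = y_true[g0 & (y_pred == 1)].mean() - y_true[g1 & (y_pred == 1)].mean()

    return {"demographic": demographic, "accuracy": accuracy, "precision": precision}

# Example: random predictions should show only small sampling gaps.
rng = np.random.default_rng(0)
y, c, a = (rng.integers(0, 2, 10_000) for _ in range(3))
print(parity_gaps(y, c, a))
```

A gap of exactly zero is rare in finite samples, so in practice each gap is compared against a tolerance.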

Fairness in Regression:

Statistical Parity:

$\min_f \mathbb{E}[\ell(Y, f(X))]$ subject to $\big|\Pr[f(X) \le z \mid A = a] - \Pr[f(X) \le z]\big| \le \epsilon$ for all $a$ and $z$ (formula from Microsoft Research)

For a regression problem, the modeler tries to minimize the expected loss between the observed values (Y) and the predicted values (f(X)). For statistical parity to hold, we minimize this loss subject to the constraint that the CDF of the predictions conditioned on the protected attribute A does not deviate from the unconditional CDF by more than a threshold epsilon.
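
As a rough empirical check of this constraint, one can compare the empirical CDF of the predictions within each group against the unconditional CDF and take the largest deviation over a grid of thresholds, in the spirit of a Kolmogorov-Smirnov statistic. A minimal sketch, with illustrative names rather than Microsoft's implementation:

```python
import numpy as np

def statistical_parity_gap(preds, a, grid_size=100):
    """Largest deviation between a group-conditional CDF of the predictions
    and the unconditional CDF, evaluated on a grid of thresholds z.
    Statistical parity (up to epsilon) holds when this gap <= epsilon."""
    zs = np.linspace(preds.min(), preds.max(), grid_size)
    overall = np.array([(preds <= z).mean() for z in zs])
    gap = 0.0
    for group in np.unique(a):
        conditional = np.array([(preds[a == group] <= z).mean() for z in zs])
        gap = max(gap, float(np.abs(conditional - overall).max()))
    return gap
```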

Bounded Group Loss:

$\min_f \mathbb{E}[\ell(Y, f(X))]$ subject to $\mathbb{E}[\ell(Y, f(X)) \mid A = a] \le \zeta$ for all $a$ (formula from Microsoft Research)

Bounded group loss means that for every value a of the protected attribute, the loss stays below a certain threshold. For example, we could require a regression model that predicts house prices to have an RMSE of at most $2,500 for every protected group, such as ethnic groups.
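
Checking bounded group loss is simpler: compute the loss (RMSE in the example above) separately for each group and compare every value against the threshold. A minimal sketch with illustrative names:

```python
import numpy as np

def group_rmse(y_true, preds, a):
    """RMSE of the predictions computed separately for each group in a."""
    return {g: float(np.sqrt(np.mean((y_true[a == g] - preds[a == g]) ** 2)))
            for g in np.unique(a)}

# Bounded group loss with threshold zeta = 2500 (the $2,500 from the text):
# all(rmse <= 2500 for rmse in group_rmse(y, f_x, a).values())
```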

What do you mean by personalization in Telecom?

Since GSM services are commodities, telecom operators pursue product differentiation in two broad ways:

  1. Price differentiation: You offer services at a different price point than your immediate competitors. If your services are cheaper and every network has the same service quality, you are likely to gain market share.
  2. Network differentiation: A segment of customers will always be willing to pay a higher price for better quality. In telecom, quality derives solely from spectrum allocation and network presence in an area. For example, each telecom operator in Pakistan has marked out territory where it provides the best service, and it usually makes the most sense to buy from the operator with the best coverage in your area.

Personalization can help telcos achieve product differentiation, through either price or the network, at the individual level. For example, you can bundle different GSM products: a customer who leans heavily on data but rarely uses calls or SMS can be offered an 'averaged' bundle price in which data is subsidized and voice is charged at a premium.
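
As a toy illustration of such an 'averaged' bundle, with entirely made-up prices and usage figures:

```python
# Toy illustration: an 'averaged' bundle for a data-heavy user (made-up numbers).
list_price = {"data_gb": 1.00, "voice_min": 0.05}  # standalone unit prices
usage      = {"data_gb": 20,   "voice_min": 50}    # expected monthly usage

# What the user would pay at list prices: 20*1.00 + 50*0.05 = 22.50
standalone_cost = sum(list_price[k] * usage[k] for k in usage)

# Personalized bundle: subsidize the service the user cares about (data)
# and recover margin on the one they barely use (voice).
bundle_price = {"data_gb": 0.85, "voice_min": 0.12}
bundle_cost = sum(bundle_price[k] * usage[k] for k in usage)  # 20*0.85 + 50*0.12 = 23.00
```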

How is personalization achieved?

The following are some of the techniques and methods used in the telecom industry to enable personalization (not exhaustive).

  1. Dynamic pricing: GSM services are bundled according to how much an individual customer may be willing to pay for them. Differentiated pricing can give customers a product offering matched to their specific needs and budget.

  2. Product recommendation: Recommendation engines are built to recommend existing pre-packaged products to customers.

  3. Discounting: Personalized discounts on existing telecom products or services, usually based on a metric that captures the additional value a customer can bring if given a discount.

  4. Geo-location / demographic pricing: For most operators, telecom services are not the same in every locality; network quality differs with the amount of infrastructure and the number of users (more users per unit of infrastructure means lower-quality service). It makes sense to use service quality as a basis for charging more to customers who are willing to pay.

  5. Network personalization: Vodafone's network personalization service is a good example. This covers any tweaking of network behavior or services according to a customer's preferences.

Relating Telecom Personalization to Unfairness

Here are some potential fairness violations that can arise when incorporating personalization:

  1. Discriminatory pricing: Standard dynamic pricing techniques give a user a bundled package based on their usage, but if a protected group resides in an area where the network is already congested, its members may face price discrimination: congestion makes them use less data and voice, which leads dynamic pricing algorithms to recommend expensive bundles to them. Under-developed regions in third-world countries have less infrastructure per user than developed urban centers, making their residents susceptible to receiving worse service for a higher price (a simple audit sketch follows this list).
  2. Adverse product recommendations: In telecom, personalized recommendation systems use prior purchase history, frequency, and monetary value as key metrics, and may also draw on web data, friends-and-family circles, and other miscellaneous factors. Some products in a portfolio might be considered 'predatory' because they give customers a suboptimal amount of resources when other products provide higher value for less money.
  3. Preferential network services: Network personalization often uses a customer's spending habits and satisfaction with services to give them preferential treatment on the network. Some ethnic groups are historically disadvantaged and less economically prosperous, so a network personalization mechanism can end up giving good service to already well-off groups while discriminating against less prosperous ones.
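
One practical way to catch the first two problems is to audit the offer log: compare what each protected group is actually being offered, both in absolute price and in price per unit of value. A minimal sketch in pandas, with a hypothetical offer log and illustrative column names:

```python
import pandas as pd

# Hypothetical log of personalized bundle offers (one row per offer).
offers = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],        # protected attribute
    "price":   [12.0, 10.0, 11.0, 16.0, 15.0, 14.0],  # offered bundle price
    "data_gb": [10, 8, 9, 9, 10, 8],                  # data included in the bundle
})

# Price per GB is a crude proxy for the value-for-money of an offer.
offers["price_per_gb"] = offers["price"] / offers["data_gb"]

# If one group is systematically offered a worse price per unit of value,
# the pricing or recommendation engine deserves a closer look.
print(offers.groupby("group")[["price", "price_per_gb"]].mean())
```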

Conclusion

Fairness in statistical models and AI systems is becoming a concern in every industry and use case. Telecom personalization is a major way to add value for businesses and customers, and access to telecom services is a prerequisite for economic prosperity, as many essential services require a reliable internet connection or a mobile phone. People working on telecom personalization need to make their models 'fair'.

