
An automatic system to choose which de-biasing methods and datasets should be used to promote Gender Equality in AI for Good

Description of the project

Summary:

Today, datasets are numerous and are used to train the algorithms at the heart of many systems. However, the quality of these datasets is not always good and, as has already been demonstrated, many of them carry bias, especially against women, but also against minorities and vulnerable groups.

These datasets are largely built around a white, Caucasian, male reference point; they are neither representative of the broader population nor adequate in terms of diversity, and other biases, based inter alia on race, ethnicity or disability, are recurrent. The same imbalance is found among the world's developers and machine learning experts, and this biased mindset in the tech community is reflected in the quality of the datasets and algorithms that are the essential fuel of AI. As a result, human-developed models become sources of discrimination, and of gender discrimination in particular.

The gender bias of natural human intelligence creates a biased machine intelligence, and the machines in turn perpetuate discrimination in our society through the use and exploitation of AI systems. AI is still in the early stages of its development and expansion, so we must ensure in a timely manner that machines do not become agents that perpetuate gender-based discrimination, and instead help them become outstanding allies in achieving gender equity.

Now this may sound like a crazy idea: how can we teach machines to be gender neutral if their architects and the creators of the datasets and algorithms are themselves biased? Moreover, how can we develop gender-neutral systems when most of the developers behind them are not conscious of their own gender bias? One could say this is impossible, but what if we proposed a solution conceived as a mutual learning process, from the machines to the humans and vice versa? The solution we present in this project could have a significant positive impact on gender equity and sustainable development. On the one hand, it offers a way to avoid further discrimination caused by the biased datasets that train machine learning models; on the other hand, it simultaneously improves gender equity knowledge by bringing the existing hidden biases of the tech community and developers to their attention.

The model we propose implies that the interaction goes both ways, with machines and humans both improving their knowledge in the service of gender equity. In our view, this is the way forward if we want to provide durable and sustainable solutions in this field.
The aim of our project is to develop ontologies that check the quality of datasets against key indicators of gender bias and, based on the result, offer solutions drawn from the latest research in this area to de-bias the dataset or mitigate the bias.


Specific problem addressed within the theme above

Today, computer scientists, software engineers and other programmers can find open-access datasets to train their machine learning models. Others select already trained models, such as those available in BERT or fastText. Although these advances are significant and promising for the many areas where AI and machine learning could be used, research has shown that many of the resulting models suffer from gender and racial bias, often because the datasets start from samples of white males or from the perspective and assumptions of white males. Examples include:

Gathering datasets and annotating data is a tedious process and can be very costly. As a consequence, the tendency is to re-use the same datasets to train new models without necessarily examining the data themselves or the problems they contain. For example, omitting or underrepresenting certain classes from a gender or race perspective means a dataset does not reflect lived reality and may ultimately teach the algorithm to build bias into its outcomes. This is the case, for example, in voice recognition, where models have been trained on more male voices than female voices.

Even where a model has already been trained, historic bias in language can cause gender bias to persist or to be incorporated in even more hidden ways. For example, the use of pre-trained models often ignores years of research into the embedding of gender stereotypes in language [5]. Bolukbasi et al. [6] show that while the expected inference man -> king, woman -> queen is made correctly, other inferences fail badly, such as man -> computer programmer and woman -> homemaker. Manzini et al. [7] demonstrate that word embeddings trained on a Reddit dataset produce highly stereotypical religious and racial inferences, such as Jew -> greedy, Christian -> familial, Muslim -> uneducated, black -> homeless and Caucasian -> hillbilly.
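For illustration, the minimal sketch below shows how such analogy probes can be reproduced. It assumes the gensim library and its downloadable word2vec-google-news-300 vectors (neither of which is prescribed by this proposal); the analogy() helper is purely illustrative.

```python
# Minimal sketch of an analogy-based bias probe in the spirit of Bolukbasi et al. [6].
# Assumes the gensim library and its packaged "word2vec-google-news-300" vectors.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large download on first use

def analogy(a, b, c, topn=3):
    """Return candidate words d such that a : b :: c : d, via the vector arithmetic b - a + c."""
    return vectors.most_similar(positive=[b, c], negative=[a], topn=topn)

# The benign analogy resolves as expected ...
print(analogy("man", "king", "woman"))                 # 'queen' ranks highly

# ... while occupation analogies surface gender stereotypes, as reported in [6].
print(analogy("man", "computer_programmer", "woman"))  # 'homemaker' appears among the top results
```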

New biases in datasets, algorithms and trained models are discovered continuously, and the list of published research demonstrating these issues keeps growing.
The body of work on testing whether a dataset contains bias, and on mitigating bias in datasets and algorithms, is also growing. However, even though part of the community is making an important effort, knowledge of the results of these studies is still confined to a small community, often the same one that led the research in the first place, creating a vicious circle. The barrier is not necessarily access to the information, as this research is often published in open access, but rather awareness of it and how widely it spreads, which limits the implementation of these results in real-world systems.
In this proposal, we offer not only to define standards for what could constitute a bias, but also a system that gathers the results of this research in one place.

[1] Tatman, Rachael. "Gender and dialect bias in YouTube’s automatic captions." In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pp. 53-59. 2017
[2] Dastin, Jeffrey. "Amazon scraps secret AI recruiting tool that showed bias against women." San Francisco, CA: Reuters. Retrieved on October 9 (2018): 2018.
[3] Buolamwini, Joy, and Timnit Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification." In Conference on Fairness, Accountability and Transparency, pp. 77-91. 2018.
[4] Wenger, Nanette K. "Women and coronary heart disease: a century after Herrick: understudied, underdiagnosed, and undertreated." Circulation 126, no. 5 (2012): 604-611.
[5] Leavy, S. (2018, May). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. In Proceedings of the 1st international workshop on gender equality in software engineering (pp. 14-16).
[6] Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems (pp. 4349-4357).
[7] Manzini, T., Lim, Y. C., Tsvetkov, Y., & Black, A. W. (2019). Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047.

Proposed approach or solution

We propose to solve this problem by creating a framework of standards that can become an off-the-shelf solution for checking gender bias in datasets, together with a system that automatically or semi-automatically checks whether a dataset meets these standards and, depending on the result, offers solutions to remove or mitigate the bias using a combination of existing methods.

The standards need to be simple to use, flexible and easy to update as new discoveries are made. Since no such standards concerning gender bias in AI/ML applications have yet gained widespread acceptance, our automated data auditing system will draw on areas such as law, human rights, gender studies and sociology to define what constitutes gender bias and what solutions are relevant to a given application. We will aim to distil the existing theory outlined in research and literature produced by groups such as:

An ontology will be used to describe the different aspects of these standards in a flexible way, and will constitute the base layer of the system. In general terms, an ontology is an explicit specification of a conceptualization [1], a definition that can admittedly be vague [2]. Here we use the term to refer to a set of rules and the relationships between them, used to represent and generate knowledge.
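As a rough sketch of what this base layer might look like, the example below uses the rdflib library (an assumption, not a commitment of the project) with purely hypothetical class and property names to encode one bias indicator and its relationship to datasets and mitigation methods.

```python
# Sketch of a possible base layer for the bias-standards ontology, using rdflib.
# All class and property names (BiasIndicator, appliesTo, hasMitigation, ...) are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

BIAS = Namespace("http://example.org/gender-bias#")
g = Graph()
g.bind("bias", BIAS)

# Core classes of the standard.
for cls in ("BiasIndicator", "Dataset", "MitigationMethod"):
    g.add((BIAS[cls], RDF.type, RDFS.Class))

# Properties linking indicators to the datasets they apply to and to known mitigations.
g.add((BIAS.appliesTo, RDFS.domain, BIAS.BiasIndicator))
g.add((BIAS.appliesTo, RDFS.range, BIAS.Dataset))
g.add((BIAS.hasMitigation, RDFS.domain, BIAS.BiasIndicator))
g.add((BIAS.hasMitigation, RDFS.range, BIAS.MitigationMethod))

# One example indicator drawn from the literature: gender imbalance in training samples,
# mitigated by oversampling the underrepresented group.
g.add((BIAS.GenderImbalance, RDF.type, BIAS.BiasIndicator))
g.add((BIAS.Oversampling, RDF.type, BIAS.MitigationMethod))
g.add((BIAS.GenderImbalance, BIAS.hasMitigation, BIAS.Oversampling))
g.add((BIAS.GenderImbalance, RDFS.comment,
       Literal("Protected groups should be represented in proportion to the target population.")))

print(g.serialize(format="turtle"))
```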

After designing the framework of standards, we will develop the second layer of the ontology, which describes the variables a semi-automatic system needs in order to establish how well a dataset reflects the true underlying population, with regard to gender but also to other important characteristics such as race. In a third step, the system will propose ways in which the dataset could be brought up to the standards defined in the first step. The output of the system will be a quantitative and qualitative evaluation of the dataset as an estimate of the “true population”, together with a set of recommendations for the developer on how to create or modify the dataset so that it meets the standard.
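The sketch below illustrates, under simplifying assumptions, what the quantitative part of such an audit could look like. It uses pandas, and the column name, reference proportions and tolerance threshold are all hypothetical.

```python
# Simplified sketch of the dataset audit step: compare group proportions in a dataset
# against reference population proportions. Column names and thresholds are hypothetical.
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str,
                         reference: dict, tolerance: float = 0.05) -> pd.DataFrame:
    """Return, per group, the observed share, the reference share, and a flag
    marking groups underrepresented by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    report = pd.DataFrame({
        "observed": observed,
        "reference": pd.Series(reference),
    }).fillna(0.0)
    report["gap"] = report["reference"] - report["observed"]
    report["underrepresented"] = report["gap"] > tolerance
    return report

# Example: a toy clinical dataset with 80% male records, audited against a 50/50 reference.
data = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
print(audit_representation(data, "gender", {"male": 0.5, "female": 0.5}))
```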

For example, a developer who wants to build a system that helps detect heart disease in patients could use historical electronic health record (EHR) data to train it. Yet, as research has started to show, some of these EHR datasets contain bias. If the system were used to test an elderly woman for cardiac disease, her symptoms would differ from those of the white males whose data are most often used for training, and there is a high risk that her heart disease would be classified in the wrong category, an error that could be life-threatening. Using a dataset composed of biased historical data could therefore lead to serious health consequences for a large proportion of the population, and developers using these data will not necessarily be aware of the bias they contain. With our system, the developer uploads the dataset and the system audits it. In this case, the output would be that women and men should be equally represented in the training data, as should the proportions of other minorities, and our automated auditing system would recommend oversampling the underrepresented populations to mitigate the statistical bias.
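Continuing this hypothetical example, the following sketch shows one form the recommended mitigation could take, namely random oversampling of underrepresented groups with pandas; a deployed system would likely combine this with more careful resampling or reweighting strategies.

```python
# Minimal sketch of one recommended mitigation: randomly oversample underrepresented
# groups until each group matches the size of the largest one. Illustrative only.
import pandas as pd

def oversample_to_parity(df: pd.DataFrame, column: str, seed: int = 0) -> pd.DataFrame:
    """Resample each group in `column` (with replacement) up to the largest group's size."""
    target = df[column].value_counts().max()
    balanced = [
        group.sample(n=target, replace=True, random_state=seed)
        for _, group in df.groupby(column)
    ]
    return pd.concat(balanced).reset_index(drop=True)

# Toy EHR-like table: 80 male records, 20 female records.
ehr = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20,
                    "age":    list(range(80)) + list(range(20))})
balanced = oversample_to_parity(ehr, "gender")
print(balanced["gender"].value_counts())  # both groups now have 80 records
```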

This approach addresses the problem of users who create a dataset, or re-use an existing one, that contains and reinforces gender bias. As explained in the summary of our proposal, the large majority of these developers are not necessarily aware of the existing bias in these datasets, nor of the unconscious gender bias that influences the content of the datasets they build. Moreover, they may not be aware of the methods available to de-bias or mitigate the dataset they wish to use. The proposed solution therefore has a double impact: it improves the datasets and it educates developers about gender bias and its negative impact on gender equity in our societies. It enables a virtuous circle between machine learning and human learning on how to avoid reproducing gender discrimination through AI, a process that is mutually reinforcing and has a durable positive impact on gender equity and the Sustainable Development Goals.

If the system and the standards we create can become a socially valued model, perhaps through the visibility offered by AI for Good, the ITU and XPRIZE and through additional social marketing, programmers will use the system to check their data as a regular part of creating new applications. By being user-friendly and quick, the system will both encourage continued innovation and encourage an end to the use of gender-biased datasets.

The proposed solution will be built iteratively, continuously incorporating feedback from users of the system and from the state of the art in areas such as sociology, gender studies, law and statistics. User feedback could also help uncover unconscious biases that have not yet been identified.

[1] Gruber, Thomas R. "Toward principles for the design of ontologies used for knowledge sharing." International journal of human-computer studies 43, no. 5-6 (1995): 907-928.
[2] Guarino, Nicola, and Pierdaniele Giaretta. "Ontologies and knowledge bases." Towards very large knowledge bases (1995): 1-2

Framework used (ethics, privacy, etc)

In addition to the gender frameworks identified above from the fields of law and human rights, we will use established frameworks to respect privacy and uphold ethical standards. The Vienna Manifesto on Digital Humanism [1] and the ACM Code of Ethics [2] are relevant standards, as is the AI4People Ethical Framework for a Good AI Society [3].

Human subjects are not used, but we will need to respect the privacy of each dataset user when collecting feedback on their use of the model, and we will incorporate privacy-protecting technologies and their latest standards, such as the Pan-European Privacy-Preserving Proximity Tracing system and the GDPR.

[1] https://www.informatik.tuwien.ac.at/dighum/index.php
[2] Anderson, Ronald E., ed. "ACM code of ethics and professional conduct." Communications of the ACM 35, no. 5 (1992): 94-99.
[3] Atomium European institute, AI4People Ethical Framework (2018) available at https://www.eismd.eu/featured/ai4peoples-ethical-framework-for-a-good-ai-society/

Relevant existing work

The aim of this project is to build on top of existing research. Researchers have already proposed interesting approaches to finding and mitigating bias. However, no one has yet proposed gathering all these methods, tools and datasets in a knowledge base that directly and easily offers the best option for what the user wants to do. Works such as Bellamy et al. [1] and Saleiro et al. [2] already offer interesting solutions for detecting and mitigating bias in datasets, but they either focus only on detection or offer very narrow solutions.
In this project, the ontology will be built from expert knowledge and from texts (sociology, gender studies, law) to extract the standards. For the output, it will draw on research into identifying and mitigating bias such as [3,4,5]. The idea of extracting rules from law, human rights, gender studies and sociology to help address issues of gender bias is novel, but logical rule extraction from similar resources has been used for other applications, such as formalising medical law into logical rules to design medical decision support systems (MDSS), extracting rules from regulations to automate rulebook management [6], extracting rules from text for retrieval and classification [7], and extracting rules from legal documents to build ontologies [8].

[1] Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., … & Nagar, S. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943.
[2] Saleiro, P., Kuester, B., Hinkson, L., London, J., Stevens, A., Anisfeld, A., … & Ghani, R. (2018). Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577.
[3] Sun, T., Gaut, A., Tang, S., Huang, Y., El Sherief, M., Zhao, J., Mirza, D., Belding, E., Chang, K. W., & Wang, W. Y. (2019). Mitigating gender bias in natural language processing: Literature review.
[4] Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing systems (pp. 4349-4357)
[5] Manzini, T., Lim, Y. C., Tsvetkov, Y., & Black, A. W. (2019) Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047
[6] Wyner, Adam Z., and Wim Peters. “On Rule Extraction from Regulations.” In JURIX, vol. 11, pp. 113-122. 2011.
[7] Kraft, Donald H., M. J. Martın-Bautista, Jianhua Chen, and Daniel Sánchez. "Rules and fuzzy rules in text: concept, extraction and usage." International Journal of Approximate Reasoning 34, no. 2-3 (2003): 145-161
[8] Mauro Dragoni, Serena Villata, Williams Rizzi, Guido Governatori. Combining NLP Approaches for Rule Extraction from Legal Documents. 1st Workshop on MIning and Reasoning with Legal texts (MIREL 2016), Dec 2016, Sophia Antipolis, France. Hal-01572443

Limitations of the approach

One primary limitation of our project will be the ongoing maintenance required to keep the standard and the ontology up to date. A practical response to this challenge is to collect adequate data from those who deploy the standard; machine learning can then be used to update and maintain the ontology, which would also help discover new needs and new forms of bias as they emerge.

Entering this knowledge, whether automatically or manually, into the knowledge base will be challenging. However, because the knowledge base is designed as an ontology, new inferences can be made: not every new rule needs to be described explicitly, since many can be inferred from the knowledge already in the base and the axioms designed into it. A second limitation will be the size of the datasets to be tested; however, with growing computational capacity this limitation should be relatively easy to overcome.
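As a small illustration of the inference point above, the sketch below (assuming rdflib and owlrl, with purely hypothetical class names) shows how a newly recorded, specific bias type is automatically recognised as an instance of the broader class it specialises once the RDFS closure is computed, so checks written against the broad class apply without adding a new explicit rule.

```python
# Sketch of rule inference over the ontology: a new, specific bias type is automatically
# recognised as an instance of the broader class it specialises, so existing checks
# written against the broad class apply without restating them.
# Assumes rdflib and owlrl; all names are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

BIAS = Namespace("http://example.org/gender-bias#")
g = Graph()

# Axiom already in the knowledge base: occupational stereotyping is a kind of gender bias.
g.add((BIAS.OccupationalStereotype, RDFS.subClassOf, BIAS.GenderBias))

# A new finding entered later, described only at the specific level.
g.add((BIAS.RedditEmbeddingFinding, RDF.type, BIAS.OccupationalStereotype))

# Compute the RDFS closure of the graph.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# The finding is now also an instance of the broader GenderBias class, without an explicit rule.
print((BIAS.RedditEmbeddingFinding, RDF.type, BIAS.GenderBias) in g)  # True
```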

Ethical implications of the approach

The solution we propose is grounded in ethics. Our project is, in a sense, conceived as an exercise in making existing machine learning practice comply with the relevant ethical standards. The datasets used to train AI present a fundamental ethical problem and a major ethical concern: they are currently biased. The purpose of our proposal is to correct this and to make machines actors of positive change for gender equality and sustainable development, to make them allies.
One major contribution of our proposal is that it indirectly addresses this ethical concern by tackling its root causes: we suggest not only improving the AI and avoiding further discrimination arising from biased datasets, but also improving the makers, who are themselves biased, in the process. Applying our solution will support fairer job distribution and fairer market access for women and minorities. AI trained with unbiased datasets will bring greater equality, less discrimination in our societies, and a more balanced delivery of services and wealth.

AI is a work in progress and is not immune to mistakes; in this instance, its main mistake is discrimination against women and minorities. Humans are making the same mistakes as the machines by collecting unrepresentative datasets. The solution we propose aims first at increasing the benefit of AI for our societies as a whole, but also at correcting the mistakes the machines are currently making. Moreover, our approach is circular and interdependent: it allows the machines and the developers to check and correct each other's gender bias over time, improves protection for citizens, and guarantees continued monitoring of outcomes for users. The project does not affect the autonomy of the machines' functioning, which is outside the scope of our proposal; rather, the developer gains greater control over the machines, making them more reliable and safe.

THE TEAM

We are a team of experts from UNHR and Cardiff University.

Hélène de Ribaupierre

Academic researcher and lecturer, with a PhD in Information Retrieval, Semantic Web and Knowledge Representation from the University of Geneva; expert in AI and computer science.

Alexia Zoumpoulaki

Expert in machine learning and statistical learning for time-series physiological data, from Greece. PhD in Computer Science (Computational Neuroscience) from the University of Kent, UK; currently lecturer and researcher at Cardiff University.

Dorina Xhixho

Former lead negotiator for Albania at the Human Rights Council and committed women’s and LGBTIQ+ rights activist. Expert in gender and human rights with over ten years of experience with the UN and international organizations.

Eric N Richardson

International lawyer with a 30-year career as a diplomat, attorney and journalist, dedicated to addressing discrimination in practice and at the UN. Former US diplomat, currently president of the NGO UNHR, Geneva.

Jane Galvao

PhD in public health, from Brazil. Expert in health and civil society organizations with deep experience in infectious disease data and modeling.

Contact us

Please send us an email at contact@hadej.ch