
Our Vision

The original AI paradigm was rule-based, built on if-then rules and specialized scripts. ChatGPT’s digital grandmother Eliza, the first chatbot, which mesmerized the masses in 1966 by pretending to be a psychotherapist, was one such program.

Around 2012, a group of researchers at the University of Toronto, led by Geoffrey Hinton, recently dubbed the “godfather of AI”, demonstrated a new approach based on Neural Networks (NNs) that could be trained on massive datasets. NNs use statistical tools akin to regression analysis, relating dependent and independent variables to find patterns in a dataset, uncover correlations invisible to the human brain, and reach conclusions.
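The statistical intuition behind this pattern-finding can be sketched in a few lines. The toy example below is illustrative only (real neural networks are vastly more complex): it fits a least-squares line relating an independent variable to a dependent one, in plain Python, with made-up data.

```python
# Illustrative sketch: the statistical core of pattern-finding is correlation.
# A one-variable least-squares fit, with no ML library, showing how a
# dependent variable is predicted from an independent one.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Made-up observations hiding the linear relationship y = 2x + 1;
# the fit recovers it from the data alone.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = fit_line(xs, ys)  # slope = 2.0, intercept = 1.0
```

A neural network chains millions of such fitted relationships together, which is precisely why its overall behavior becomes a black box.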

The results were amazing: the new bots seemed to diagnose illnesses quickly and accurately, beat chess grandmasters decisively, and win the Jeopardy! game show with ease. But two interrelated issues plagued the new paradigm. First, the NNs were black boxes whose exact functioning eluded even their designers; second, bias seemed to be present, in varying degrees, in most of the results the algorithms reached.

Since the black boxes could not be tweaked (their designers did not really know how), the only recourse for data scientists and data engineers was to find new methods to reduce bias in the datasets themselves. For instance, if facial recognition bots could not correctly identify people with darker skin, the solution was to add millions more such images to the training datasets.
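This rebalancing strategy can be sketched in a few lines of Python; the labels and counts below are hypothetical, and real pipelines augment with new images rather than duplicates.

```python
import random
from collections import Counter

random.seed(0)  # reproducible sketch

# Hypothetical, heavily imbalanced training set: 90 samples labeled
# "light" skin tone versus only 10 labeled "dark".
dataset = [("light", i) for i in range(90)] + [("dark", i) for i in range(10)]

# Oversample the underrepresented group until both groups are equal in size.
minority = [sample for sample in dataset if sample[0] == "dark"]
augmented = dataset + random.choices(minority, k=80)

counts = Counter(label for label, _ in augmented)  # both labels now at 90
```

The mechanics are trivial; as the text goes on to argue, the hard part is that the bias often lies in what was collected in the first place, not merely in the proportions.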

However, the problem did not go away, again for two reasons. First, LLMs are trained on data produced by human beings. Not only do they reflect societal biases; given their computational power, they amplify them. An early Microsoft bot called Tay was let loose on Twitter, and within 24 hours it had turned into a homicidal, sexist, and racist creature. More recently, rap lyrics produced by ChatGPT suggested that

“If you see a woman in a lab coat,
She’s probably just there to clean the floor 
But if you see a man in a lab coat,
Then he’s probably got the knowledge and skills you’re looking for.”

DALL-E has been shown to produce horrifyingly antisemitic images.

In short, AI algorithms are biased because they are trained on our data. And we are biased.

Secondly, some biases are structural and very hard, if not impossible, to eliminate. For instance, certain groups are over- or underrepresented in the underlying databases: African Americans are overrepresented in the American criminal justice system and underrepresented in healthcare. There are statistical workarounds, such as proxy variables, but the fact remains that, for structural reasons, too many African Americans are in the criminal justice system and too few are part of the mainstream healthcare system, and therefore of health statistics.

An even better example is gender bias. The reason ML algorithms and AI applications have been unable to produce gender-neutral results is that every piece of information we collect is inherently gendered. Gender permeates every aspect of social life; it is embedded in everything we do, say, and understand. Consequently, data cannot be made gender-neutral.

Perhaps more importantly, while men, women, and all gender groups are affected differently by the same phenomena, only the male perspective is treated as universal and worthy of note. For instance, men and women experience natural disasters very differently, but most measures are designed with men in mind. During the COVID pandemic, many countries refused to collect sex-disaggregated data until it was shown that the virus killed significantly more men than women.
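What “sex-disaggregated data” means in practice can be shown with a minimal sketch; all numbers below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical patient records: (sex, outcome). The overall fatality rate
# hides a difference that only disaggregation by sex reveals.
records = ([("M", "died")] * 60 + [("M", "recovered")] * 440
           + [("F", "died")] * 40 + [("F", "recovered")] * 460)

deaths = defaultdict(int)
totals = defaultdict(int)
for sex, outcome in records:
    totals[sex] += 1
    if outcome == "died":
        deaths[sex] += 1

overall_rate = sum(deaths.values()) / len(records)          # 0.10
rates = {sex: deaths[sex] / totals[sex] for sex in totals}  # M: 0.12, F: 0.08
```

The aggregate figure (10%) looks sex-neutral; only the disaggregated rates reveal that one group is dying at half again the rate of the other.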

From crash test dummies to cancer research to optimal office work temperatures, women’s presence is marked by their absence.

What we need is not to find ways to produce gender-neutral results, but to create AI tools that will be gender-representative. Instead of eliminating gender differences, which is both impossible and undesirable, we need systems that will identify and address the gendered consequences of any course of action, choice, measure, policy, or policy implementation.

Can this be done?

A New Paradigm: Knowledge Bases and Machine Reasoning

In his seminal book The Structure of Scientific Revolutions, Thomas Kuhn explained that paradigms are replaced when they can no longer provide scientific explanations for certain phenomena. It is not that certain answers cannot be found; the questions needed to reach them cannot even be formulated within that paradigm. We believe the data-driven AI paradigm is reaching that point, and we need a new approach to move beyond this impasse.

Enter Machine Reasoning (MR).

What is MR?

Let’s start with Reasoning.

According to Léon Bottou, an MR research pioneer, “a plausible definition of ‘reasoning’ could be algebraically manipulating previously acquired knowledge in order to answer a new question.”

As for MR systems, Jerry Kaplan notes that they divide “tasks requiring expertise into two components: a ‘knowledge base’ – a collection of facts, rules, and relationships about a specific domain of interest represented in symbolic form – and a general-purpose ‘inference engine’ that describes how to manipulate and combine these symbols.”

In short, MR systems combine well-defined accumulated knowledge with symbolic reasoning to “understand” a problem and its context (also known as common sense), and use that understanding to answer a different set of questions for which they had no training data.

The advantage of MR systems is that datasets are not the determining element in their training. Because the knowledge base provides the symbolic logic and the definitions and correlations of the dynamic contextual elements, MR systems require much smaller datasets to “understand” a complex set of relations and reach conclusions. And their results are explainable and justifiable.
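The knowledge-base/inference-engine split can be made concrete with a toy forward-chaining engine. This is a deliberately minimal sketch in Python with invented facts and rules; real systems such as Cyc are vastly richer.

```python
# Knowledge base: symbolic facts plus if-then rules (premises -> conclusion).
facts = {"has_lab_coat(alex)", "works_in_lab(alex)"}
rules = [
    ({"has_lab_coat(alex)", "works_in_lab(alex)"}, "is_scientist(alex)"),
    ({"is_scientist(alex)"}, "has_expertise(alex)"),
]

def infer(facts, rules):
    """General-purpose inference engine: forward-chain over the rules
    until no new fact can be derived; return the enlarged fact set."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

derived = infer(facts, rules)  # now also contains is_scientist / has_expertise
```

Because every derived fact traces back to explicit facts and rules, the conclusions are explainable and justifiable in a way that black-box network outputs are not.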

This new paradigm was pioneered almost forty years ago by Douglas Lenat, who realized that without the contextual understanding of the human mind, AI tools would never move beyond Eliza-style gimmicks. He proceeded to build the first “common sense” knowledge base, known as the Cyc project. Three decades after he started, specialized versions of his common-sense knowledge base are being used in a variety of fields to solve problems characterized by a dearth of data.

Another attempt to build a “common sense” AI has recently been proposed by Yann LeCun, the chief AI scientist of Meta, the parent company of Facebook and Instagram. His model is a hybrid construction in which the AI has a basic model of the world along with symbolic-logic-based reasoning, and it “learns” by observing the world and comparing and revising its model against its observations. This is known as the “Joint Embedding Predictive Architecture,” or JEPA.

Tellingly, Yann LeCun has been quite dismissive of LLMs and generative AI, declaring them a dead end.

While both of these MR approaches share a common-sense emphasis, the Meta perspective is largely visual, whereas the Cyc vision is anchored in its knowledge base and symbolic-logic inference capabilities. Moreover, unlike JEPA, which is currently just a concept, the Cyc AI has a four-decade effort behind it.

What we propose is to bring together various actors to build a “gender knowledge base” and couple it with a specialized inference engine to create an AI that is gender-representative and free of negative societal bias.

If successful, the same approach can be scaled up to other areas of social bias, such as ethnicity, race, and religion.

To achieve this goal, we brought together a team of gender and AI experts led by Saniye Gulser Corat, the former Director of UNESCO’s Division for Gender Equality, who started the gender-and-AI discussion with two groundbreaking studies on the gender gap in digital skills and gender bias in AI algorithms. The Digital Future Society named her one of the top ten women leaders in technology for 2020. In December 2020, she was selected as Global Leader in Technology by the Women in Tech global movement, and in March 2021, she was included in Apolitical’s 100 Most Influential People in Gender Policy.

The think tank is in the process of raising funds for this project.

We expect to be at the forefront of the new paradigm and to provide a satisfactory answer to the gender-equality conundrum.
