Code and Common Sense

“It was a bright cold day in April, and the clocks were striking thirteen”

“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

Props if you recognised the first sentence. It is the opening line of 1984 by George Orwell. The lines that follow were drafted by an artificial intelligence model that uses deep learning to produce human-like text. Say hello to GPT-2, created by a company called OpenAI in February 2019. OpenAI was founded by Elon Musk, Sam Altman, Ilya Sutskever and others. GPT-2 was not built to do these specific tasks. It is what one would call a ‘general-purpose learner’. Its application comes from training it to do specific things (and it does all sorts of cool things).

GPT-2 wrote an article for the New Yorker (well, sort of) and was interviewed by the Economist. (Here, give GPT-2 a spin)

To understand how much of a leap GPT-2 was, it is important to understand the trajectory of artificial intelligence research up until that point. 

The advancement of AI has had its share of theatrics. Computers have, in the past, successfully beaten humans at chess and Go. While these are incredible feats, they just mean that the machines are really good at playing chess and Go. I agree that this reads as very, very reductionist, but as a meta-analysis of artificial intelligence research, it shows the gap between intended outcomes and achieved outcomes. We set out to build human-like AI systems and we solved for board games.

‘A man went to a restaurant. He ordered a steak. He left a big tip.’

If someone were to ask what the man ate, steak would be the obvious answer. But there is nothing in those lines that says he ate the steak. We read between the lines to infer that. An AI could not and does not. (H/T to Ray Mooney’s interview for a Quanta Magazine essay)
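You can probe this gap yourself. Below is a minimal sketch that asks the publicly released GPT-2 model to finish the restaurant story; it assumes the Hugging Face transformers library, and the prompt and setup are my illustration, not anything from the interview.

```python
# A minimal sketch, assuming the Hugging Face transformers library
# (pip install transformers torch). The probe is illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("A man went to a restaurant. He ordered a steak. "
          "He left a big tip. The man ate")
# Sample a few short continuations and see what the model infers.
for out in generator(prompt, max_new_tokens=5, num_return_sequences=3, do_sample=True):
    print(out["generated_text"])
# A human infers ‘steak’ instantly; the model may or may not get there.
```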

Another example – even if we were novices at driving, we’d still recognise that our car can drive through a pile of snow, but that it cannot drive through a big boulder. We can discern between the two. We understand the various things in the world, how they work, and how we interact with them. Currently, we cannot teach machines every possible interaction that we just implicitly know.

This is a timeless challenge for artificial intelligence – to combine computing and common sense. (Fun fact – possibly the first paper on computer programs and common sense, ‘Programs with Common Sense’, was written in 1959 by John McCarthy, then at MIT.) Decades of effort have gone into computational linguistics and automatic speech recognition, and that seems like a good place to illustrate this gap.

Science fiction from the 20th century showed us boxy robots that could comprehend human conversation, respond appropriately, and at times sprinkle in a little wit and sarcasm that elicited a few laughs, more out of fascination than out of objectively good comedy. When computer scientists subsequently set out to make this our reality, they did it using the language of computers – logic. We developed algorithms that employed mathematical models and accounted for the syntax and structure of human speech. The outcome was brilliant at translating human speech correctly, but the algorithms did it without understanding human speech.

The difference between the two, and its implications, is explained in this article through a thought experiment by the philosopher John Searle. Yes, you read that right. Philosophy has a critical role to play in understanding the limits of human thought and, by extension, what it can achieve and whether it can be replicated (through computers). Also, philosophers have a say on a lot of things.

Searle asks us to imagine someone in a room with no understanding of the Chinese language, but well-equipped with a rule book that prescribes, for any string of Chinese symbols passed in, which string of Chinese symbols to pass back out. To those outside, the room appears to understand Chinese perfectly. When one considers this thought experiment, it’s clear one needn’t understand a language to produce convincing output in it – one only needs to follow the rules faithfully.
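To make that concrete, here is a toy sketch of the room in code. The ‘rule book’ below is invented for the illustration – the point is only that fluent-looking responses can come out of pure lookup.

```python
# A toy sketch of Searle's Chinese Room. The rule book is invented
# for illustration: input symbols map to output symbols, nothing more.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Look the symbols up and slide the prescribed response back out.
    No step in this function involves understanding Chinese."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a fluent reply, produced with zero comprehension
```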

Artificial neural networks are systems designed to mimic the interconnected layers of neurons in our brains. We developed such neural networks for speech recognition in the 1990s. True to purpose, they translated well enough without really learning the language. Models could not be ‘trained’ the way children learn, where words emerge spontaneously at an acoustic level and gain meaning through common sense. In short, the human learning experience could not be replicated. They are, however, quite adept at performing tasks that involve processing large volumes of data with clear rules about what to do with each item, such as translation or, say, categorising different pieces of ceramic for archaeologists.
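If ‘layers of neurons’ sounds mysterious, it needn’t. Here is a minimal sketch, assuming only numpy, of what a layer actually is – a weighted sum of inputs followed by a squashing function, stacked one on top of another.

```python
# A minimal sketch of an artificial neural network, assuming only numpy.
# Each layer is just a matrix multiplication plus a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of 'neurons': weighted sum of inputs, then squashed."""
    w = rng.normal(size=(x.shape[-1], n_out))
    return np.tanh(x @ w)

x = rng.normal(size=(1, 8))   # e.g. 8 acoustic features from a speech frame
h = layer(x, 16)              # a hidden layer of 16 neurons
y = layer(h, 4)               # an output layer of 4 neurons
print(y.shape)                # (1, 4)
```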

In fairness, common sense is really, really hard to program. It is implicit and hard to pin down. It is also near impossible to build a repository of underlying assumptions (you can drive through snow-like objects on the road), include exceptions to those assumptions (you cannot drive through rock-like objects on the road), and train models to discern correctly in every situation.
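To see why, here is a toy version of such a repository. Every object and rule below is invented for the sketch, and the failure mode shows up immediately: anything you forgot to write down gets the wrong answer.

```python
# A toy 'common sense' repository, with invented rules and exceptions.
# The brittleness is the point: unlisted cases fall through.
DRIVE_THROUGH_OK = {"snow pile", "plastic bag", "puddle"}
DRIVE_THROUGH_NOT_OK = {"boulder", "concrete block"}

def can_drive_through(obstacle: str) -> bool:
    if obstacle in DRIVE_THROUGH_NOT_OK:   # hand-written exceptions
        return False
    return obstacle in DRIVE_THROUGH_OK    # hand-written assumptions

print(can_drive_through("snow pile"))  # True
print(can_drive_through("boulder"))    # False
print(can_drive_through("fog bank"))   # False – a human would know better
```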

To address this, we are witnessing increased collaboration between AI research scientists and developmental psychologists. If we can understand how we learn and behave as children, it could open new doors in pushing the limits of artificial intelligence. This is what many are counting on to bring us that much closer to human-like AI. Read more here.

Coming back to the GPT-n series of machine learning models – they do not have common sense (yet) and they do not organise linguistic symbols or employ rules to process natural language. GPT-3, the latest general learning platform, has nearly 175 billion parameters, improving on the 1.5 billion of GPT-2, and was trained on a ridiculously large amount of text.

In simple terms, those 175 billion parameters are the adjustable weights of a neural network large enough to hold a statistical representation of human language.
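To make ‘parameters’ concrete: GPT-3 itself isn’t downloadable, but GPT-2 is, and you can count its weights yourself. A minimal sketch, assuming the Hugging Face transformers library and PyTorch:

```python
# A minimal sketch, assuming the Hugging Face transformers library and
# PyTorch. It counts the weights of the small public GPT-2 model.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")  # the small public variant
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")
# Roughly 124 million here; the full GPT-2 has 1.5 billion, GPT-3 ~175 billion.
```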

It can be trained and fine-tuned to answer questions or paraphrase large texts, things that were previously considered difficult and ambiguous for computer programs to do. There are interpretability challenges that emerge from this advancement, i.e., the ‘how did you do it?’ part of the response – the sheer size of these neural networks renders it nearly impossible to interpret the results. Despite that, these early days are very exciting.
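As a taste of the question-answering side, here is a minimal sketch using the transformers question-answering pipeline on the restaurant story from earlier. Note this downloads a small model fine-tuned for extractive QA, not GPT-3 – it is only an illustration of what fine-tuning for a task looks like in practice.

```python
# A minimal sketch, assuming the Hugging Face transformers library.
# pipeline() fetches a small default model fine-tuned for extractive
# question answering; not GPT-3, just an illustration of fine-tuning.
from transformers import pipeline

qa = pipeline("question-answering")
context = "A man went to a restaurant. He ordered a steak. He left a big tip."
result = qa(question="What did the man order?", context=context)
print(result)  # a dict like {'answer': 'a steak', 'score': ..., 'start': ..., 'end': ...}
```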

I am going to leave you with a fascinating read that GPT-3, the next generation of OpenAI’s general learning platform, wrote for the Guardian – a very simple essay on why humans shouldn’t feel threatened by Artificial Intelligence. Cheeky, ain’t it?

A few tips before you get to the fun part –

#1 – Do not read much into the sensational headline

#2 – Please read the Editor’s note at the end of the article

#3 – The part in bold is a prompt from The Guardian to set the tone for the essay

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction. (Continue reading…)

….

P.S.

The GPT-3 model can be used for a lot more than writing columns or giving interviews. Read here.
