Conversational AI – Could the future be about less data, not big data?

Will conversational AI ever require less data?

AI and big data are perfect companions, right? It’s indisputable that access to huge volumes of data allows AI assistants to deliver better, faster, more-accurate responses. But there are downsides too. For example, is this reliance on huge amounts of data sustainable or ethical? Probably not. And if you need 1,000,000 examples to create an application, do time and money become too big a barrier for many developments? Very likely. That’s not even considering the significant costs involved in employing humans to annotate training data. And let’s not forget the elephant in the room – looming threats from regulators over data privacy.

With this in mind, is it possible that in the future conversational AI could require less data, not more? Breaking the reliance on in-domain data is an area the Alana Research Team is currently investigating as a priority.

The AI industry has a voracious appetite for data              

As a general rule, anything AI-related is data hungry. The more data you put through machine learning models, the better the insights they produce in response. So, let’s start by giving big data credit where it’s due: the huge advances in dialogue systems seen in recent years could not have been made without training systems on massive volumes of data.

Deep learning methods have certainly delivered huge advances in the capabilities of conversational AI. But they have also highlighted the limitations of taking such a data-reliant approach. For example, the GPT-3 language model boasts an amazing 175 billion parameters. Working at such an immense scale has had a dramatic impact on advancing the capabilities of AI and machine learning. But it can’t be overlooked that GPT-3 also cost an estimated $4.6 million to train. Being realistic, very few organisations will have access to the huge amounts of data and finances needed to support models like this.

As a result, one important angle of research we’re pursuing at Alana is how we can build and train exceptional language-generating AI systems from much smaller amounts of data.

What happens when systems work from less data?

Systems that are trained on insufficient data can end up severely lacking robustness. For instance, a customer services AI assistant can be trained on as many complaint scenarios as an organisation has available. But if the system is pushed into handling a query at the edge of its training data, where fewer examples exist, it will start to falter. The result is less satisfactory responses and a disappointing user experience.

So, what’s the alternative?

We want to get to a situation where conversational AIs can be trained to make more human-like judgements based on context. In simple terms, we want machines to learn to use the data available to them to predict the most sensible next action based on dialogue understanding. At Alana, we’re looking into various ways to enhance this. 

For example, we know it’s possible to build linguistically informed models based on rules designed by experts (you can read more about this here). But what’s driving our latest research at Alana is exploring how we can use pre-training to take a much more data-driven approach.
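For contrast, here’s a toy sketch of the rule-based end of that spectrum; the intents, patterns and responses are invented purely for illustration. Rules like these need no training data at all, but every new behaviour has to be hand-written by an expert, which is the scaling limitation a more data-driven approach tries to avoid.

```python
# A toy, rule-based dialogue component: hand-written patterns map user
# utterances to intents and canned responses. Everything here is invented
# for illustration, not taken from a real system.
import re

RULES = [
    (re.compile(r"\b(refund|money back)\b", re.I), "request_refund",
     "I can help with a refund. Could you share your order number?"),
    (re.compile(r"\b(opening hours|open|closing time)\b", re.I), "opening_hours",
     "We're open 9am to 5pm, Monday to Friday."),
]

def respond(utterance: str) -> str:
    """Return the response of the first matching rule, or a fallback."""
    for pattern, intent, response in RULES:
        if pattern.search(utterance):
            return response
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond("Can I get my money back?"))         # refund rule fires
print(respond("When are you open on Saturdays?"))  # opening-hours rule fires
```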

For example, we’re investigating how transfer learning methods could be applied to create dialogue systems that need far less in-domain data. This is an approach to machine learning where a system is first developed to fulfil a general role (e.g. general-purpose language modelling), and this pre-trained model then goes on to become the starting point for a new application.
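As a minimal sketch of what that looks like in practice, the snippet below fine-tunes a general-purpose pre-trained language model on a handful of in-domain dialogue lines. It assumes the Hugging Face transformers and datasets libraries and uses GPT-2 as an illustrative starting point; the two-line “dataset” is only a placeholder, and a real project would use more examples, though still far fewer than training from scratch would demand.

```python
# Transfer learning sketch: start from a model pre-trained for general-purpose
# language modelling, then fine-tune it on a small in-domain dialogue set.
# GPT-2 and the tiny dataset below are placeholders for illustration only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# 1. Load the pre-trained model and tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 2. A tiny in-domain dataset, standing in for the (still modest) data a real
#    application would collect.
dialogues = [
    "User: My order hasn't arrived. Agent: Sorry to hear that. Could you share your order number?",
    "User: I was charged twice. Agent: I can look into a refund for you right away.",
]
dataset = Dataset.from_dict({"text": dialogues}).map(
    lambda example: tokenizer(example["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# 3. Fine-tune: the pre-trained weights become the starting point for the new task.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dialogue-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```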

The ‘ImageNet moment’ in computer vision awakened the world of NLP to the huge potential of using this type of practical transfer learning to pre-train large-scale models and re-use them for a series of downstream tasks. We’ve seen signs of this working effectively already. For example, the GPT-3 API has been applied to a wide range of completely different problems – everything from a recipe generator to a search engine. This shows how a single pre-trained model can be adapted to deliver exceptional performance on tasks it was never explicitly built for.
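Fine-tuning isn’t even always necessary: the same pre-trained model can often be repurposed simply by changing the prompt. Here is a minimal sketch, again assuming the transformers library and using GPT-2 as a small, freely available stand-in for a GPT-3-scale model (so expect rough outputs; the point is the pattern, not the quality).

```python
# Repurposing one pre-trained model for two unrelated tasks via prompting
# alone, with no task-specific training data or fine-tuning involved.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Task 1: recipe-style generation, steered only by the prompt.
recipe_prompt = "Recipe for tomato soup:\n1."
print(generator(recipe_prompt, max_new_tokens=40)[0]["generated_text"])

# Task 2: question answering, same model, different prompt.
qa_prompt = "Q: What is the capital of France?\nA:"
print(generator(qa_prompt, max_new_tokens=10)[0]["generated_text"])
```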

Given the enormous resources required to train deep learning models, repurposing systems like this is one route by which greater data efficiency could be achieved.

How will this impact on conversational AI systems in the future?

This is exactly what our research team is working out. This area of conversational AI is definitely still at the research stage, but it’s moving in very exciting directions. The end result will hopefully be more manageable and ethical conversational AI systems that don’t require 1,000,000 examples and annotations by highly trained computer scientists. 

Our ambition is to create an Alana conversational interface that a dialogue designer within a business (who won’t necessarily be proficient in machine learning or deep learning techniques) will be able to adjust and amend themselves.

We believe this is a crucial next step in democratising conversational AI by making it more accessible for more organisations and wider applications.

