Models matter less than what we make of them, especially in Natural Language Processing.
Right now, we find ourselves in a race to harness the full potential of Large Language Models (LLMs).
These models detect patterns in news faster than we can say "breaking story" and translate text into numbers in nanoseconds.
The main limiting factor today is our ability to correctly define the use cases. And ChatGPT won't do that for us.
Many of our language models at AMS have stood the test of time, built upon the enduring transformers framework that underpins the recent wave of colossal generative models. We just use the technology in a different way, narrowly specialised.
What is really striking is that we keep discovering new ways to deploy even well-established models.
For instance, today we introduce the News Balance: neither a new model nor a new standard dataset, but a fresh approach to harnessing language models in our domain.
It illustrates the mental shift we need to make: from trying to replicate existing data with LLMs to imagining how LLMs can create data we have never had access to.
The first two articles explain the approach and provide examples. If you are curious about the topic and want to discuss it, feel free to drop us a line.
We are constantly asked about benchmarking our NIPI time series against CPI, so you will also find an update featuring the latest US core CPI data below.
We hope you enjoy the reading!