Revolutionising the World: The History and Impact of Large Language Models

SubQuery Network
Aug 30, 2024


What is an LLM?

In the simplest terms, a large language model (LLM) is a type of advanced software that can communicate in a human-like manner. These models have a remarkable ability to understand complex context and generate coherent content with a human feel.

LLMs were instrumental in the development of ChatGPT, the next evolutionary step in artificial intelligence, in which generative AI techniques were combined with a large language model to produce a noticeably smarter conversational system.

At its core, an LLM is a machine learning model that understands and generates human language using deep neural networks. The main job of a language model is to calculate the probability of the next word given an input sentence: for example, “The grass is ____”, with the most likely answer being “green”.

After being trained on a large text dataset, the model learns to recognise statistical patterns in language and can predict the next word in a sentence. The result of this process is a pre-trained language model.
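To make this concrete, here is a minimal, purely illustrative sketch of next-word prediction in Python. It uses simple bigram counts over a toy corpus rather than the deep neural networks real LLMs rely on, and the variable and function names are invented for this example:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "large text dataset" an LLM is trained on.
corpus = (
    "the grass is green . the sky is blue . "
    "the grass is green . the sky is clear ."
)

# Count how often each word follows each other word (a simple bigram model).
follow_counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    candidates = follow_counts.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("is"))  # -> "green", the most frequent continuation of "is"
```

A real LLM does the same kind of next-word scoring, but over an entire vocabulary and with billions of learned parameters instead of raw counts.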

How did LLMs come about?

Large Language Models (LLMs) weren’t created overnight.

It all began with experiments in the 1950s on neural networks and information processing systems aimed at allowing computers to process natural language. Researchers at IBM and Georgetown University worked together to create a system that could automatically translate phrases from Russian to English, marking the beginning of machine translation research.

The idea behind LLMs was first floated with the creation of ELIZA in the 1960s: the world’s first chatbot, designed by MIT researcher Joseph Weizenbaum. ELIZA marked the beginning of research into natural language processing (NLP) and provided the foundation for future, more complex LLMs.

Then, roughly 30 years later, in 1997, Long Short-Term Memory (LSTM) networks came into existence. Their introduction enabled deeper and more complex neural networks that could handle far more data. The Stanford CoreNLP suite, introduced in 2010, gave developers off-the-shelf tools for tasks such as sentiment analysis and named entity recognition.

Next, in 2011, Google Brain launched, bringing capabilities such as word embeddings that allowed NLP systems to capture the context of words more effectively. The emergence of transformer models in 2017 marked a significant turning point. Think of GPT, which stands for Generative Pre-trained Transformer: a model that generates, or “decodes”, new text.

Starting in 2018, researchers began building much larger models. That year Google introduced BERT, a bidirectional model with 340 million parameters in its largest version. BERT quickly became the go-to tool for natural language processing, and by 2019 it was powering English queries on Google Search thanks to its ability to understand context and adapt to various tasks.

The rise of ChatGPT

Then OpenAI’s GPT-2, released in 2019 with 1.5 billion parameters, amazed everyone with its ability to generate convincing text. In 2020, GPT-3, boasting 175 billion parameters, set the LLM standard and later became the foundation for ChatGPT. When ChatGPT launched in November 2022, the public truly took notice of LLMs’ impact: suddenly, even non-technical users could converse with the model, sparking both excitement and concern.

The transformer’s architecture, originally an encoder-decoder design, paved the way for larger LLMs like GPT-3 and ChatGPT. By using word embeddings and attention mechanisms to capture context and prioritise the most relevant information, transformers revolutionised the field with their ability to process vast amounts of data efficiently.
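To illustrate the attention idea, here is a small NumPy sketch of scaled dot-product attention, the core operation inside a transformer layer. The dimensions, random weights, and variable names are arbitrary placeholders; real transformers use multiple attention heads, masking, and parameters learned during training:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                         # context-weighted mix of the value vectors

# Toy example: 3 tokens, each represented by a 4-dimensional embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                    # token embeddings (learned in a real model)
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))  # query/key/value projections

out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (3, 4): each token's new representation now mixes in its context
```

The key point is that every token’s output depends on a weighted view of all the other tokens, which is what lets transformers understand context across an entire passage.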

Most recently, OpenAI introduced GPT-4, which is estimated at around one trillion parameters, roughly five to six times the size of GPT-3 and approximately 3,000 times the size of the original BERT.

The Convergence of LLMs and Web3: A New Era

We already know that Web3 empowers you to control your online world, manage your data, govern platforms, and even earn from your digital presence — all made possible by blockchain technology.

Now, imagine the convergence of LLMs with Web3 — like supernovas colliding, unlocking a new realm of possibilities and technological advancements. Here’s a glimpse of what’s to come:

  • AI-powered Democracies: LLMs can analyse data and public opinion to inform governance in decentralised communities, building fairer, more responsive online societies.
  • Hyper-personalised Learning: LLMs can craft customised learning experiences, making education truly individual and effective.
  • Smart Contract Automation: LLMs can help in writing, optimising, and auditing smart contracts, making it easier for users to interact with decentralised applications.
  • Decentralised Identity Verification: LLMs can streamline and enhance identity verification processes, while Web3 ensures that user data remains private and under the user’s control.
  • Secure Data Marketplaces: LLMs can help analyse and interpret data in decentralised data marketplaces, where users can buy, sell, or trade their data securely using blockchain technology.

SubQuery’s Foray into Decentralised AI Inference Hosting

The AI inference market is currently dominated by major centralised providers such as OpenAI and Google Cloud AI, which charge high fees and use customer data to improve their proprietary models. In response, SubQuery is building an affordable, open-source alternative for hosting production AI models, with the aim of letting users deploy a production-ready LLM on the SubQuery Network in as little as 10 minutes.

SubQuery’s approach is centred on decentralisation, distributing prompts across hundreds of node operators to protect user privacy and foster an open-source ecosystem. By moving away from closed-source models, SubQuery challenges the dominance of large corporations, promoting a more democratic and transparent AI landscape.
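As a purely hypothetical sketch of what querying a hosted open-source model could look like, the snippet below sends a single chat-style HTTP request. The endpoint URL, model name, and payload shape are invented placeholders, not SubQuery’s actual API; many open-source model hosts expose a similar “chat completions” style interface:

```python
import requests

# Hypothetical example only: the URL, model name, and JSON shape below are
# placeholders for illustration, not SubQuery's real endpoints.
ENDPOINT = "https://example-node-operator.xyz/v1/chat/completions"

response = requests.post(
    ENDPOINT,
    json={
        "model": "llama-3-8b-instruct",  # an assumed open-source model name
        "messages": [
            {"role": "user", "content": "Summarise what an LLM is in one sentence."}
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```

In a decentralised setup, requests like this one would be routed to independent node operators rather than to a single centralised provider.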

Learn more about our new vision here.

Conclusion

Language models have come a long way, from simple rule-based systems to powerful transformers like GPT-3. These advancements enable incredible applications today, from automatic translation to content generation.

Today, LLMs are more than just tools for enhancing text-based applications; they are increasingly capable of understanding and communicating with humans.

As LLMs grow in scale and capabilities, they are moving towards self-sustaining models that continuously learn and improve from generated data, enhancing operations in industrial settings.

The possibilities are endless.

About SubQuery

SubQuery Network is innovating web3 infrastructure with tools that empower builders to decentralise the future — without compromise. Our flexible DePIN infrastructure network powers the fastest data indexers, the most scalable RPCs, innovative Data Nodes, and leading open source AI models. We are the roots of the web3 landscape, helping blockchain developers and their cutting-edge applications to flourish. We’re not just a company — we’re a movement driving an inclusive and decentralised web3 era. Let’s shape the future of web3, together.

Linktree | Website | Discord | Telegram | Twitter | Blog | Medium | LinkedIn | YouTube

