

Through Knowledge Sharing to Singularity, Accelerated By LLMs

Last Updated on February 27, 2024 by Editorial Team

Author(s): Ivan Ilin

Originally published on Towards AI.

LLMs are one of the pinnacles of human knowledge, with transformative potential comparable to the Internet's. How did we come to that? Why do knowledge sharing and knowledge flow play a crucial role in our world's acceleration, and why are LLMs the lapis philosophorum that could bring us to the singularity?

I'll briefly outline some milestones in the history of knowledge sharing, the effects open source had on knowledge accumulation, and the way it brought us to LLMs.
Then we'll stop at the current point to reflect on the effects LLMs will have on tech, science, and society, touching on the recent techno-optimist e/acc philosophy that promises to usher humanity into a bright singularity future.

We believe intelligence is in an upward spiral [1] – Marc Andreessen

Some history

The ability to share knowledge might be the key distinction that allowed humankind to evolve into such a complex civilization and to become the dominant species on our planet.

Since the beginning of time, the collective effort to explore the unknown and find meaning in the unexplainable has driven our evolution, propelling humanity forward on a relentless quest for knowledge and understanding. Spurred by this inherent curiosity, our species' development has been distinguished from others by the accumulation of wisdom passed down through generations.

Early humans, driven by the need to collaborate and share experiences, developed language to communicate, describe their surroundings, and transmit knowledge. This cognitive leap laid the foundation for the complex societies that would emerge over time and, later, for science, as the collective understanding of the world became a shared endeavor.

From 15,000 BC cave drawings to Wikipedia, every iteration of knowledge-sharing tools has been an invitation to collaborate. While documentation techniques have grown ever more sophisticated, the goal has stayed the same: to find meaning and share the findings. Teamwork has repeatedly proven to be a key factor in the progress, evolution, and survival of humanity.

The obvious milestones in knowledge sharing are the invention of writing around 3400 B.C. in Sumer and the Gutenberg press almost five thousand years later. The first allowed knowledge to be captured, while the second enabled its distribution to the masses, grounding the Scientific Revolution and, in turn, the Industrial Revolution.

The quest to understand meaning and truth and to unravel the mysteries of existence has compelled us to cultivate an environment where the exchange of ideas and the advancement of knowledge are paramount. During the 17th century, most European countries established their Academies of Sciences, accelerating the exchange and validation of knowledge.

[Figure: a timeline of knowledge-sharing milestones. The time axis has a logarithmic scale due to our world's exponential acceleration.]

Internet era & open source

Long story short: a hundred years after the invention of the telephone and the radio, we finally came up with the Internet. The first message was sent on October 29, 1969, from UCLA to the Stanford Research Institute. Universities were the first organizations involved in information-sharing technology, though under the military supervision of (D)ARPA.

The Internet brought us into the globalization era, with its vast availability of knowledge, and people started sharing their solutions to popular problems, sparking the open-source movement. Engineers had shared the source code of their projects even before the Internet, but it certainly sped things up dramatically.

Apart from sharing code, people started sharing opinions, ideas, and facts on the Internet. Speaking of knowledge sharing, it is impossible to overlook Wikipedia's launch in 2001. Looking ahead, it has served the ML community well as a high-quality curated corpus of information for building various Natural Language Understanding tools and models.

The open-source movement took hold with the rise of the Internet, and it has since grown into a vibrant scene with many contributors and projects. Fast forward to 2008, and we see the GitHub launch, providing developers with a platform to collaborate on their projects online. Since then, the open-source approach has become solid ground for the tech scene's exponential growth, allowing society not only to avoid a good chunk of duplicative effort but to build a shared foundation for our current and future innovations. More than 99% of Fortune 500 companies use open-source code [2].

The whole machine learning industry has, since its early days, grown on open-source solutions like scikit-learn (2007) and then the deep learning frameworks TensorFlow (2015) and PyTorch (2016). Later on, in the 2020s, people started sharing pre-trained model weights on Hugging Face and linking arXiv papers to their implementations on Papers with Code.

These were all the building blocks of the knowledge-sharing culture, community, and tech, particularly in machine learning, that paved the way for ChatGPT and other LLMs to appear. Without them, we would never have developed AI to its current state in just a decade.

This sharing culture dramatically increased our pace of progress by multiplying contributions, sharing the current state-of-the-art tech with the world almost instantly, and allowing every engineer to start their journey right from the current pinnacle.

In fact, the whole open-source culture is a collaborative learning environment, and we are all working in the zone of proximal development, in Lev Vygotsky's terms.

LLMs and knowledge: a symbiotic relationship

Transformers' invention and training

Additionally, this knowledge-sharing process left plenty of publicly available data: first in the form of texts, then code, and lately public models and datasets. This data abundance is crucial to the LLM training process. LLMs thrive on large datasets, which expand thanks to the collective knowledge accumulated and shared on the Internet through open platforms: human-generated texts from scriptures to Instagram posts, Reddit (note its recent $60M/year training-data deal with Google), the massive GitHub and StackOverflow codebases, etc.

Let's dig a bit into the LLM training process. First, the GPT family models are decoder-only variants of the industry-revolutionizing transformer architecture introduced in the seminal paper Attention Is All You Need. Many novel ideas were implemented at once, resulting in a complete replacement of the previous state-of-the-art variations of Recurrent Neural Networks by this new architecture. One of the pivotal ideas, besides self-attention, was the application of self-supervised learning to model pretraining. Encoder models like BERT were pre-trained on two core tasks: masked language modeling, predicting a randomly masked word given its context, and next-sentence prediction, classifying whether two given sentences are consecutive. GPT-style decoders, in turn, were pre-trained on plain next-token prediction.

This approach eliminated the need to create specific datasets by manually labeling texts, allowing researchers to increase the training data size dramatically without any human work involved and to use any human text of reasonable quality to train Transformer models: first high-quality curated corpora like Wikipedia, and later the whole Internet.
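To make these objectives concrete, here is a minimal sketch using the Hugging Face transformers library (my choice for illustration; the prompts and model names below are just examples):

from transformers import pipeline

# Masked language modeling: predict a randomly masked word from its context.
# No human labels are needed; the raw text itself provides the supervision.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("Knowledge sharing drove the [MASK] of civilization.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))

# Causal (next-token) modeling, the objective GPT-style decoders train on:
# predict the next word given everything that came before it.
generator = pipeline("text-generation", model="gpt2")
print(generator("Knowledge sharing drove the", max_new_tokens=10)[0]["generated_text"])

Both objectives derive the training signal from the text itself, which is exactly what lets pretraining scale to Internet-sized corpora.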

The invention of Transformers started a new cycle in Natural Language Understanding, bringing us to the era of the first capable chatbots with retrieval and generative parts, high-quality semantic search, and multimodal generative models.

LLMs are born

A few years later, OpenAI applied reinforcement learning from human feedback (RLHF) to train InstructGPT and make it safer, more helpful, and more aligned: labelers provided demonstrations of the desired model behavior and ranked several outputs from the models, and this data was then used to fine-tune GPT-3, eventually giving birth to ChatGPT in 2022.
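To make that pipeline explicit, here is a conceptual sketch of the three RLHF stages; every function and dataset name below is a hypothetical placeholder for illustration, not OpenAI's implementation or a real training API.

# Conceptual sketch of the RLHF pipeline described above. All names are
# hypothetical placeholders; the bodies only mark where real training would go.

def supervised_fine_tune(model, demonstrations):
    """Stage 1: fine-tune the base LM on labeler-written demonstrations."""
    return model  # placeholder: would run gradient descent on the demos

def train_reward_model(model, comparisons):
    """Stage 2: fit a scalar reward on labelers' rankings of model outputs,
    typically with a pairwise (Bradley-Terry) loss."""
    return lambda prompt, response: 0.0  # placeholder reward function

def rl_optimize(model, reward_fn, prompts):
    """Stage 3: optimize the LM as a policy against the reward (e.g., with PPO),
    usually with a KL penalty keeping it close to the stage-1 model."""
    return model  # placeholder: would run the RL loop

base_lm = object()  # stands in for a pretrained GPT-3-like model
sft = supervised_fine_tune(base_lm, demonstrations=[("prompt", "ideal answer")])
reward = train_reward_model(sft, comparisons=[("prompt", "better", "worse")])
aligned = rl_optimize(sft, reward, prompts=["a held-out prompt"])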

That's how LLM reasoning abilities emerged: we may speculate that InstructGPT, which demonstrated a much better ability to follow human instructions and produce more coherent responses, learned this reasoning and logic from humans. And let's step back for a moment to remind ourselves that the models have been learning in natural language, so our knowledge-sharing mechanism worked for a machine learning model too.

Here lies a thin line between the reproduction of coherent sequences and actual "understanding" of the information encoded by those sequences. Understanding means the model's capability not only to capture relations but also to demonstrate general logic and common knowledge [3].
GPT-4's reasoning capabilities are extensively tested in the OpenAI tech report, but if you'd like a brief overview, here is my 8-month-old (yes, THAT old) post reflecting on LLMs' reasoning potential.

So we got LLMs as a reasoning engine capable of executing logical operations, which is already very promising. But while LLM weights contain human-level reasoning capabilities, they may lack relevant information. That's where RAG, short for retrieval-augmented generation, takes the stage. RAG is the most popular architecture for LLM applications: it injects retrieved information into the LLM prompt, giving the LLM knowledge to reason upon.
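A minimal sketch of that flow, assuming the sentence-transformers library for embeddings; call_llm is a hypothetical stand-in for whatever completion API you use, and the documents are toy examples:

import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "The first Internet message was sent on October 29, 1969, from UCLA.",
    "GitHub launched in 2008 as a platform for online code collaboration.",
    "Wikipedia launched in 2001 and became a curated corpus for NLP research.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

def answer(query: str) -> str:
    """Inject the retrieved snippets into the prompt and let the LLM reason."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # hypothetical stand-in for any LLM completion API

The point of the pattern is that fresh or private knowledge stays outside the model weights and is fetched on demand, so the LLM only has to reason, not to remember everything.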

LLMs are the new interface for information

This RAG approach brings immense possibilities for creating knowledge assistants capable of answering complex logical questions and fetching any data needed from multiple sources: creating reports, preparing analytical notes and comparative analyses, writing long and short texts, or even working as your thinking assistant. We are building such an intelligent knowledge interface at iki.ai, an assistant and a second brain for professionals and teams, but there are many more solutions in the market focusing on specific use cases. Perplexity.ai is competing with Google for the mainstream information-discovery market, while the Arc browser goes a step further: it wants not just to merge various pieces of information from different sources with an LLM, but to build a whole new interface for the web aimed at information aggregation and structuring (as far as I understood from their recent video).

This ability of LLMs to generate coherent and logically correct text, given some information, is what changes the whole knowledge-sharing paradigm: it is now possible to merge and transform information according to a specific request, an operation that previously required a human expert.

Those who are building truly disruptive LLM-powered products are reshaping the interfaces to information.

That is a paradigm shift in the knowledge creation process as we know it. Before, the dots were connected only in human brains: scientists studied, adopted thinking patterns and research methods, and consumed vast amounts of information to come up with an innovation, adding a new layer of ideas to the existing knowledge.

Now an LLM can do that for you. And obviously, it does not have a cognitive load threshold or memory limit. A research agent augmenting a scientist's intelligence can now have tools like access to scientific databases and the Internet, a goal, and some human-in-the-loop guidance. That's not a futuristic idea; that's how research will be done this year, and there already are products implementing early prototypes of research assistants.

This will boost knowledge creation, a.k.a. tech and scientific progress, dramatically speeding up ideation loops and research cycles while granting almost instant access to information, thus ushering in the age of singularity. Arguably, we are entering it now.

Most knowledge workers could delegate some part of their daily responsibilities to LLMs right now; I am speaking of analysts, lawyers, researchers, and experts in general. One of the obvious problems we are tackling at iki.ai is professional information overload: you may ask your knowledge assistant to distill key ideas or connect the dots across multiple contexts for you. The cognitive load, especially in the field of Machine Learning, has reached unprecedented levels, and our brains are not made to withstand this constant influx of information, so having software to store your knowledge and query it in various ways becomes a necessity.

The next big cognitive frontier once attributed solely to humans is creativity, which could be interpreted as the ability to connect previously unrelated dots. Now these dots, or ideas, may be extracted from a context by an LLM and then connected in various ways until something clicks with the user; that is how the first true second brains are emerging.

I've already mentioned agents, LLMs capable of using external tools to complete a task; that's a whole next paradigm. Proactive assistants, software automating whole pipelines, automated GUI interactions to integrate agentic systems with the current generation of software, and who knows what comes next.
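As a sketch of what such an agent loop can look like: the snippet below lets an LLM pick a tool, executes it, and feeds the observation back until the model answers. call_llm and both tools are hypothetical placeholders, not a real agent framework.

import json

def web_search(query: str) -> str:
    return f"(stub) top results for {query!r}"  # placeholder tool

def calculator(expression: str) -> str:
    # Toy tool for illustration only; never eval untrusted input in production.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"web_search": web_search, "calculator": calculator}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}. Reply with JSON: "
               '{"tool": name, "input": arg} or {"answer": text}.']
    for _ in range(max_steps):
        step = json.loads(call_llm("\n".join(history)))  # hypothetical LLM call
        if "answer" in step:
            return step["answer"]
        observation = TOOLS[step["tool"]](step["input"])
        history.append(f"Observation: {observation}")
    return "Stopped after max_steps without a final answer."

The essential loop is the same in every agent framework: the model decides, the runtime executes the tool, and the observation becomes new context.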

The coming few years will change many of the processes and tools we got used to in the previous decade, and rest assured that the best minds in the industry are already working on this huge business opportunity.

Societal effects

Such drastic technological changes cannot happen without affecting society, especially in the informational world we live in after the fourth industrial revolution.

One of the philosophies outlining a fairly positive view of this major tech shift is Effective Accelerationism, or e/acc, coined in 2022 and substantially developed by Marc Andreessen in his Techno-Optimist Manifesto several months ago. The text is remarkable and I recommend reading the original, but let's cite some core ideas:

We believe intelligence is the ultimate engine of progress. Intelligence makes everything better. Smart people and smart societies outperform less smart ones on virtually every metric we can measure. Intelligence is the birthright of humanity; we should expand it as fully and broadly as we possibly can.

Ray Kurzweil defines his Law of Accelerating Returns: Technological advances tend to feed on themselves, increasing the rate of further advance.

We believe in accelerationism – the conscious and deliberate propulsion of technological development – to ensure the fulfillment of the Law of Accelerating Returns. To ensure the techno-capital upward spiral continues forever.

Although a bit one-sided, the e/acc movement is popular in the California tech community and on Twitter: the new AI tech creates a lot of market possibilities for tech entrepreneurs, engineers, and investors, and for plenty of data-related businesses. Capitalism is an economic system based on growth, so to me, e/acc looks more like a technocratic approach with some sparkling singularity touch.

AI and LLMs are the new tech revolution, and those with tech capabilities, capital, and an audience are positioned much better to seize this new opportunity: successful VCs and entrepreneurs will become richer, and the benefits of the new tech will be proportional to tech ownership. The least qualified employees will be the first to be replaced, so I do not see a better world for everyone immediately; more likely, a continuous series of layoffs and turbulence as the disruptive tech is adopted. The upside is that eventually the most tedious tasks will be gone along with the low-paid, less-qualified jobs, while more companies are created, more cases are solved, and more successful VC stories happen.

At a higher level of abstraction, knowledge can fast-track happiness. Adopting a motivation-driven learning approach enriches personal and professional journeys, aiding in discovering your ikigai, the essence of your existence. Your "second brain" can help you accelerate the iteration loop of ideation, creation, feedback collection, and refinement, allowing for a steeper and more successful learning curve.

Despite all the positive things about knowledge sharing and the tech advancements humanity has fostered over the last 300–400 years, there are also a few problems caused by the unprecedented acceleration we are experiencing: psychological overload, insane rivalry in the markets, and tech knowledge becoming obsolete within a month.

The current pace of advancement in AI technology resembles an explosion. Humanity operates as a cohesive entity, exchanging information, resources, and responsibilities. The rapidity of change mirrors a shock wave, and it remains uncertain how human society and individual psyches will adjust to this acceleration, or whether adaptation is even feasible for the majority.

Another thing is that while the unprecedented acceleration somewhat equalizes opportunities, it also creates an extreme rush and rivalry. By no means can you stop learning in such a world. While some may accept this, others simply are not prepared for it.

As our psyche is not designed for such a pace of change, it may cause a general feeling of insecurity and uncertainty. The speed of tech growth, leveraged by VC money, leaves little chance for ordinary people to grasp this new world, adopt it, and adapt. The psychological effects of living in this unstable and less predictable world are far from beneficial, but no one is going to slow down; remember the open letter we had last year?

Conclusion

Knowledge sharing has been the key factor in humanity's progress.
LLMs are the current pinnacle of this progress, accelerating information sharing, knowledge creation, and the economic growth spiral. That's because LLMs are the new generation of interfaces to information, transforming it according to user queries and tasks and thus unlocking instant knowledge sharing. Add agents capable of completing complex tasks and pipelines, and you get the augmented-intelligence reality we are now facing.

The bright future of humanity in the e/acc paradigm is defined by this free knowledge flow and AI-enhanced intellectual work, accelerating tech progress and ushering us to the fabulous singularity.
LLM-powered knowledge assistants are a crucial part of this acceleration.

The world has entered a new phase, and we have to adapt faster than ever before. Better off with an assistant to help 🙂

Just curious: is that singularity yet?

Find me on LinkedIn or Twitter to challenge the opinions & ideas shared above!

The main references are collected in my knowledge base; there is a co-pilot to chat with this set of documents: https://app.iki.ai/playlist/393.

References

[1] https://a16z.com/the-techno-optimist-manifesto/

[2] https://a16z.com/open-source-from-community-to-commercialization/

[3] https://hackernoon.com/scratching-the-singularity-surface-the-past-present-and-mysterious-future-of-llms


Published via Towards AI
