Chaos, Complexity, Emergence & Technological Singularity
Author(s): Ted Gross
Hiding In Plain Sight Behind Artificial Intelligence
Preface:
"Chaos, Complexity, Emergence & Technological Singularity" is essentially a condensed version of a more extensive work published in October 2021 ("Applied Marketing Analytics", Volume 7, No. 2, a journal of Henry Stewart Publications) presenting the theory of "Emanating Confluence", which deals with the progression of Artificial Intelligence (AI) – from Chaos Theory to Complexity Theory to Emergence and then the Technological Singularity. (If interested, you are welcome to message me here on Medium or on LinkedIn for a complimentary PDF copy of "Emanating Confluence".)
Chaos Theory
"Introduce a little anarchy. Upset the established order, and everything becomes chaos. I'm an agent of chaos. Oh, and you know the thing about chaos? It's fair!"¹
"Invention, it must be humbly admitted, does not consist in creating out of void, but out of chaos; the materials must, in the first place, be afforded: it can give form to dark, shapeless substances, but cannot bring into being the substance itself."²
In 1961, when experimenting with weather pattern information, a lack of caffeine led Edward Lorenz to key a shortened decimal into a series of computed numbers. This mistake gave birth to "chaos theory"³ and the "butterfly effect".⁴ The subsequent publication of his paper "Deterministic nonperiodic flow"⁵ prompted a debate on chaos that is still very much alive today. The butterfly effect has been prosaically described as the idea that a butterfly flapping its wings in Brazil might cause a tornado in Texas. Alternatively, as Lorenz himself put it:
"One meteorologist remarked that if the theory were correct, one flap of a sea gull's wings would be enough to alter the course of the weather forever. The controversy has not yet been settled, but the most recent evidence seems to favour the sea gulls."⁶
At the heart of chaos theory lies the seemingly modest statement that small, even minute events can influence enormous systems and lead to significant consequences – hence the butterfly effect, or, as it is defined within chaos theory, "sensitivity to initial conditions".
Surprisingly enough, one can express chaos in mathematical terms. Randomness suddenly becomes an orderly disorder, which means, not only in existential terms but in real-world scenarios, that there is a hidden order to chaos.
"The modern study of chaos began with the creeping realisation in the 1960s that quite simple mathematical equations could model systems every bit as violent as a waterfall. Tiny differences in input could quickly become overwhelming differences in output – a phenomenon given the name 'sensitive dependence on initial conditions'."⁷
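To make "sensitive dependence on initial conditions" concrete, here is a minimal Python sketch (added for illustration; it is not part of the original article) that iterates the logistic map, a textbook chaotic system, from two starting points that differ by one part in a billion:

```python
# Illustrative sketch: sensitive dependence on initial conditions in the
# logistic map, a classic chaotic system. Two starting points differing
# by one part in a billion diverge into completely different trajectories.

def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1 - x)

a, b = 0.400000000, 0.400000001   # almost identical initial conditions
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a-b):.6f}")
```

Within a few dozen iterations the two trajectories bear no resemblance to one another, which is why long-range prediction fails even though the rule itself is perfectly deterministic.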
To illustrate the butterfly effect, advocates of chaos theory often cite the proverb "For want of a nail":
"For want of a nail, the shoe was lost.
For want of the shoe, the horse was lost.
For want of the horse, the rider was lost.
For want of the rider, the battle was lost.
For want of the battle, the kingdom was lost.
And all for the want of a horseshoe nail."
The lesson is obvious: the lack of something as inconsequential as a single nail can cause the loss of a kingdom. When so many minor events can have such enormous consequences, how can one even attempt to predict the behavior of systems? This nagging question remained in the background for centuries. Chaos was attributed to a supreme being, karma, or plain old luck. Despite its ubiquity, there was no way to foretell or control the chaos. Events are random, and random events defy prediction. Or so everyone believed.
Almost as a genetic imperative, the brain seeks "patterns". However, the terms "chaos" and "patterns" seem to be polar opposites. If there is a pattern to be discerned, then how can there be chaos? Furthermore, if chaos prevails, then how can there be an underlying pattern?
In 2005, Lorenz condensed chaos theory into the following: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future."⁸
Ancient man sought patterns in a night sky filled with the chaos of hundreds of millions of stars and found the constellations. Modern man seeks underlying patterns of behavior and activity in everyday life. The recent COVID-19 outbreak is an example of such a pursuit. At present, at least, there is less focus on the source of the proverbial nail, but intense interest in such patterns as how the virus spreads, how specific prophylactic measures have worked, and the patterns of historical outbreaks such as the black death (bubonic plague) and the Spanish flu at the end of the First World War, in the hope that it is possible to apply pattern recognition to combating the spread of the coronavirus. The identification of such patterns assists in combating the virus by making it possible to project possible future outcomes based upon the present situation.⁹ (It will be interesting to see how Amy Webb analyses this in her upcoming book, "The Genesis Machine".¹⁰)
"Chaos appears in the behaviour of the weather, the behaviour of an airplane in flight, the behaviour of cars clustering on an expressway, the behaviour of oil flowing in underground pipes. No matter what the medium, the behaviour obeys the same newly discovered laws. That realisation has begun to change the way business executives make decisions about insurance, the way astronomers look at the solar system, the way political theorists talk about the stresses leading to armed conflict."¹¹
Chaos theory specifies not only that there are geometric patterns to be discerned in the seemingly random events of a complex system, but also introduces "linear" and "nonlinear" progressions. Linear progressions go from step A to step B to step C. Such systems lend themselves to predictability. Their patterns are apparent even before they begin. They take no heed of chaos as they have clear beginnings with specific steps along the way. Unfortunately, the way our brains handle data is mainly linear because this is how the majority of people are trained to think from birth.
"Linear relationships can be captured with a straight line on a graph. Linear relationships are easy to think about: the more the merrier. Linear equations are solvable, which makes them suitable for textbooks. Linear systems have an important modular virtue: you can take them apart, and put them together again – the pieces add up. Nonlinear systems generally cannot be solved and cannot be added together. In fluid systems and mechanical systems, the nonlinear terms tend to be the features that people want to leave out when they try to get a good, simple understanding … That twisted changeability makes nonlinearity hard to calculate, but it also creates rich kinds of behaviour that never occur in linear systems."¹²
"How, precisely, does the huge magnification of initial uncertainties come about in chaotic systems? The key property is nonlinearity. A linear system is one you can understand by understanding its parts individually and then putting them together … A nonlinear system is one in which the whole is different from the sum of the parts … Linearity is a reductionist's dream, and nonlinearity can sometimes be a reductionist's nightmare."¹³
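The distinction drawn in these quotations can be shown numerically. The toy functions below (chosen purely for illustration, not taken from the cited texts) demonstrate that a linear map obeys superposition, so the parts add up, while even the simplest nonlinear map does not:

```python
# A minimal numerical illustration of linear vs. nonlinear behaviour:
# for a linear map the whole equals the sum of the parts; for a
# nonlinear map it does not.

def linear(x):
    return 2.0 * x            # f(a + b) == f(a) + f(b)

def nonlinear(x):
    return x * x              # g(a + b) != g(a) + g(b) in general

a, b = 3.0, 4.0
print(linear(a + b), linear(a) + linear(b))           # 14.0 14.0 - the pieces add up
print(nonlinear(a + b), nonlinear(a) + nonlinear(b))  # 49.0 25.0 - they do not
```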
For instance, structured query language (SQL) purists set up data stores with traditional relationships they define for the system. This may be wonderful for basic name-address-looked-at-bought-something systems, as what is being analyzed rests on a previously decided relationship, whether one-to-one or one-to-many. One also has absolute control over the data going into the system.
Both those who teach and those who implement SQL programming are blind to chaos and reject its consequences. They rid their systems of "noise" by making these systems adhere to previously defined rules. Information entropy is predefined in that there can be no "surprise" in the data, and bias exists from the initial stage. If the data are not of a specific predefined composition (e.g. string, numeric, binary), they will simply not enter the system, even if the currently rejected data may prove crucial later on – the information is forever lost to the system. In short, pure SQL programming disallows the viewing of data in nonlinear terms, which can have disastrous consequences for both data and AI.
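As a hedged sketch of that point (the field names and schema below are hypothetical, invented only for the example), compare a rigid, predefined schema that silently drops non-conforming records with a data lake that stores everything as-is for later analysis:

```python
# Hypothetical schema, for illustration only: a rigid schema silently
# drops any record that does not match, whereas storing raw records
# as-is ("schema on read") keeps information that may only become
# meaningful later.

SCHEMA = {"name": str, "age": int, "purchase": float}

def strict_insert(table, record):
    """Accept a record only if every field matches the predefined types."""
    if set(record) == set(SCHEMA) and all(
        isinstance(record[k], t) for k, t in SCHEMA.items()
    ):
        table.append(record)
        return True
    return False   # rejected data never enters the system

structured, raw_lake = [], []
incoming = [
    {"name": "Ada", "age": 36, "purchase": 19.99},
    {"name": "Ted", "age": "unknown", "purchase": 5.0, "mood": "curious"},  # "noise"?
]

for rec in incoming:
    strict_insert(structured, rec)   # the second record is silently lost
    raw_lake.append(rec)             # a data lake keeps it for later analysis

print(len(structured), len(raw_lake))   # 1 2
```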
"Textbooks showed students only the rare nonlinear systems that would give way to such techniques. They did not display sensitive dependence on initial conditions. Nonlinear systems with real chaos were rarely taught and rarely learned. When people stumbled across such things – and people did – all their training argued for dismissing them as aberrations. Only a few were able to remember that the solvable, orderly, linear systems were the aberrations. Only a few, that is, understood how nonlinear nature is in its soul."¹⁴
Data systems today are as chaotic as the weather. Scraping data means one does not know what to expect from the data. One must seek patterns and information once that data lake takes form. Approaching data in a linear manner will always create bias, as one decides what data to find, in what order to find the data, what the structure of the data must be, and the rules to which it must adhere. Neither linear thinking nor traditional SQL structures can provide a proper answer. Information entropy, bias, and the application of Bayes' theorem cannot reveal adequate results because insufficient information is collected. Failure to conduct data analysis correctly will always lead to massively erroneous results in AI.
The "eureka moment" of chaos theory boils down to a single number – 4.6692016 – otherwise known as "Feigenbaum's constant".¹⁵ The essential word here is "constant", although few scientists or mathematicians would have believed it possible until it was categorically proven. Simply stated, what Mitchell Feigenbaum discovered was that there is a universality in how complex systems work.¹⁶ Given enough time, this constant will always appear in a period-doubling series, and it is universal. Chaos swings like a pendulum along a mathematical axis. Once one accepts disorder and chaos, one can plan for it – even within large systems. The fact that stability can be found even within chaotic systems creates a whole new universe of possibilities.
"Although the detailed behaviour of a chaotic system cannot be predicted, there is some 'order in chaos' seen in universal properties common to large sets of chaotic systems, such as the period-doubling route to chaos and Feigenbaum's constant. Thus, even though 'prediction becomes impossible' at the detailed level, there are some higher-level aspects of chaotic systems that are indeed predictable."¹⁷
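For readers who want to see the constant appear, the following rough sketch (my own illustration, not taken from the cited sources) locates the first few period-doubling bifurcations of the logistic map and takes the ratio of successive gaps between them; the ratios head toward 4.669…, although a simple script like this only gets within a few percent:

```python
# A rough numerical sketch: approximating Feigenbaum's constant from the
# period-doubling cascade of the logistic map x -> r*x*(1-x). Bifurcation
# points are located by bisecting on the detected attractor period, so
# the ratios are only approximate; the true constant is 4.6692016...

def attractor_period(r, transient=100_000, max_period=64, tol=1e-8):
    """Iterate past the transient, then measure the attractor's cycle length."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1 - x)
    x0 = x
    for p in range(1, max_period + 1):
        x = r * x * (1 - x)
        if abs(x - x0) < tol:
            return p
    return None  # no short cycle found (period too long, or chaotic)

def bifurcation_point(period, lo, hi, iters=30):
    """Bisect for the r at which the attractor stops having the given period."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if attractor_period(mid) == period:
            lo = mid   # still on the old cycle
        else:
            hi = mid   # the period has already doubled
    return 0.5 * (lo + hi)

# First period-doubling thresholds of the logistic map: 1->2, 2->4, 4->8, 8->16.
r1 = bifurcation_point(1, 2.9, 3.1)
r2 = bifurcation_point(2, 3.4, 3.5)
r3 = bifurcation_point(4, 3.54, 3.56)
r4 = bifurcation_point(8, 3.564, 3.57)

# Ratios of successive gaps converge (slowly) toward 4.6692016...
print((r2 - r1) / (r3 - r2), (r3 - r2) / (r4 - r3))
```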
Chaos theory has its limits, however, as there will always be more than one butterfly flapping its wings. In many systems, the sensitivity to initial conditions will eventually become too complex for any type of prediction. Lorenz's weather prediction, for example, holds for a short period of a few days at most. As it stands, there is no way to discover what "initial condition" may become significant in four days' time. One may view short-term weather forecasting as a deterministic system; however, according to chaos theory, random behavior remains a possibility even in a deterministic system with no external source.
"The defining idea of chaos is that there are some systems – chaotic systems – in which even minuscule uncertainties in measurements of initial position and momentum can result in huge errors in long-term predictions of these quantities … But sensitive dependence on initial conditions says that in chaotic systems, even the tiniest errors in your initial measurements will eventually produce huge errors in your prediction of the future motion of an object. In such systems (and hurricanes may well be an example) any error, no matter how small, will make long-term predictions vastly inaccurate."¹⁸
This leads to complexity theory.¹⁹
"In our world, complexity flourishes, and those looking to science for a general understanding of nature's habits will be better served by the laws of chaos."²⁰
Complexity Theory
"The complexity of things – the things within things – just seems to be endless. I mean nothing is easy, nothing is simple."²¹
"Complex systems with many different initial conditions would naturally produce many different outcomes, and are so difficult to predict that chaos theory cannot be used to deal with them."²²
Nonlinear systems are not so easily defined nor understood – and so complexity (also known as "complex systems") enters the realm of investigation. Complexity theory intertwines information theory, entropy, and chaos theory, and leads towards "emergence". Approaching complexity as a single science with one definition or uniform topic is impossible. There is an almost infinite possibility of initial events all working together, somehow, mysteriously, towards an unknown goal.
For an introduction to the nature of complexity, consider a colony of ants:
"Colonies of social insects provide some of the richest and most mysterious examples of complex systems in nature. An ant colony, for instance, can consist of hundreds to millions of individual ants, each one a rather simple creature that obeys its genetic imperatives to seek out food, respond in simple ways to the chemical signals of other ants in its colony, fight intruders, and so forth. However, as any casual observer of the outdoors can attest, the ants in a colony, each performing its own relatively simple actions, work together to build astoundingly complex structures that are clearly of great importance for the survival of the colony as a whole."²³
A unique aspect of ant colonies is that there is no apparent central control or leader. Nevertheless, a colony will create ceaseless patterns, collect and exchange information, and evolve in the environment in which it finds itself. Similar behavior manifests in stock markets, within biological cell organizations, and in artificial neural networks (ANNs). Complexity appears almost everywhere, following on the heels of chaos. Mitchell has formulated an excellent (if partial) definition of complexity:
"A system in which large networks of components with no central control and simple rules of operation give rise to complex collective behaviour, sophisticated information processing, and adaptation via learning or evolution … Systems in which organised behaviour arises without an internal or external controller or leader are sometimes called self-organising. Since simple rules produce complex behaviour in hard-to-predict ways, the macroscopic behaviour of such systems is sometimes called emergent. Here is an alternative definition of a complex system: a system that exhibits nontrivial emergent and self-organising behaviours. The central question of the sciences of complexity is how this emergent self-organised behaviour comes about."²⁴
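Conway's Game of Life is a standard, compact illustration of this definition, sketched below for illustration: every cell follows the same local rule with no central controller, yet a "glider" emerges, a coherent shape that travels across the grid and is a property of the whole rather than of any single cell.

```python
# Simple rules, no central controller, emergent behaviour: Conway's Game
# of Life. Each cell only counts its eight neighbours, yet a "glider"
# pattern emerges that travels diagonally across the grid.
from collections import Counter

def step(live):
    """live is a set of (row, col) cells; apply the standard Life rules."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = set(glider)
for generation in range(12):
    cells = step(cells)
print(cells)   # the same glider shape, shifted diagonally across the grid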
There are other, perhaps more subtle ways to measure a complex system. If one measures the information entropy, one can see how much "surprise" is left once the "noise" is eliminated. If this surprise is above a specific, pre-set value, one can assume complexity. Alternatively, perhaps one should just look at the size of the set. For instance, DNA sequencing – along with the attendant possibilities – is a complex system under these parameters (or actually, under any parameters).
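The "surprise" in a message can be made precise as Shannon entropy, measured in bits. The short sketch below (with invented example strings) shows that a perfectly predictable message carries none, while a uniformly varied one carries the maximum:

```python
# Shannon entropy in bits as a measure of "surprise": a predictable
# message carries none; a uniformly varied one carries the maximum.
import math
from collections import Counter

def entropy_bits(message):
    counts = Counter(message)
    total = len(message)
    return sum(-(n / total) * math.log2(n / total) for n in counts.values())

print(entropy_bits("aaaaaaaa"))   # 0.0 - no surprise at all
print(entropy_bits("aabbccdd"))   # 2.0 - two bits of surprise per symbol
```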
Of course, the most significant issue in the age of AI and data stems from Alan Turing's famous question: "Can machines think?"²⁵ The consequences of such a question regarding complexity theory are enormous. If there is a possibility for thinking machines, is it possible for a machine to gain "consciousness" and "intelligence"? If the data pool is so large as to endow thinking upon a machine, will information entropy be full of surprise for that machine? Or will the machine choose to ignore critical information as just "noise"? Will the bias inherent in the data that created the ultimate complex system be used by the machine to logically propagate erroneous assumptions until the machine becomes a danger to its creators? Will the machine that thinks understand language and communication in all its forms, including voice intonations, facial expressions, the meaning of ambiguous statements that humans know intuitively how to interpret, and, most importantly, emotion and sentiment?
Consider the fact that it is already possible to build ANNs that work and can teach themselves at ever-growing speed. Still, if asked how the ANN is teaching itself such complex interactions, everyone involved will either throw their hands up in despair for lack of an answer or offer theory upon theory – none of which will answer the simple question: "How did this ANN teach itself?"
"But in a complex system such as those I've described above, in which simple components act without a central controller or leader, who or what actually perceives the meaning of situations so as to take appropriate actions? This is essentially the question of what constitutes consciousness or self-awareness in living systems."²⁶
The games of go and chess are self-contained complex systems. The number of moves in these games is beyond comprehension, while each move results from sensitivity to initial conditions. "One concept of complexity is the minimum amount of meaningful, non-random, but unpredictable information needed to characterise a system or process."²⁷
In 2017, AlphaGo Zero,²⁸ a version of the AlphaGo software from DeepMind,²⁹ was the first version of AlphaGo to train itself to play go (arguably more complex than chess) without the benefit of any previous datasets or human intervention. Built upon a neural network and using a branch of AI known as "reinforcement learning", its knowledge and skill were entirely self-taught.
In the first three days, AlphaGo Zero played 4.9 million games against itself in quick succession. It appeared to develop the skills required to beat professional go players within just a few days, and in 40 days it surpassed all previous AlphaGo software and won every game.³⁰ Although this is narrow AI (in that it could play go and nothing else), this achievement is impossible to ignore.
Demis Hassabis, the co-founder and CEO of DeepMind, stated in 2017 that AlphaGo Zero's power came from the fact that it was "no longer constrained by the limits of human knowledge",³¹ while Ke Jie, a world-renowned go professional, said that "Humans seem redundant in front of its self-improvement".³²
The most chilling comment, however, came from David Silver, DeepMind's lead researcher:
"The fact that we've seen a program achieve a very high level of performance in a domain as complicated and challenging as go should mean that we can now start to tackle some of the most challenging and impactful problems for humanity."³³
Simply put, this means it is possible to implement the AlphaGo Zero narrow-AI lessons within general or strong AI. This again circles back to Turing's all-encompassing question, "Can machines think?", and the various questions that follow from it.
There are rudimentary and general possibilities for prediction within chaotic systems; there is also the universality of the Feigenbaum constant, but complexity goes way beyond these rules. It contains so much surprise in the information entropy, so many points of "sensitivity to initial conditions", so many systems which are not yet understood, and so many possibilities of bias slipping in when the data are not "pure", that we remain blindly fumbling while trying to understand the nature and consequences of complexity.
"Chaos has shown us that intrinsic randomness is not necessary for a system's behaviour to look random; new discoveries in genetics have challenged the role of gene change in evolution; increasing appreciation of the role of chance and self-organisation has challenged the centrality of natural selection as an evolutionary force. The importance of thinking in terms of nonlinearity, decentralised control, networks, hierarchies, distributed feedback, statistical representations of information, and essential randomness is gradually being realised in both the scientific community and the general population."³⁴
"What's needed is the ability to see their deep relationships and how they fit into a coherent whole – what might be referred to as 'the simplicity on the other side of complexity'."³⁵
Emergence
"Emergence results in the creation of novelty, and this novelty is often qualitatively different from the phenomenon out of which it emerged."³⁶
Emergence may be categorized as a step in the evolutionary process. It is best perceived as a new state of being that arises from a previous state. In Mitchell's previously discussed definition of complexity, she offers an alternative definition for a complex system, that is, "a system that exhibits nontrivial emergent and self-organizing behaviors. The central question of the sciences of complexity is how this emergent self-organized behavior comes about".³⁷
What is meant by "emergence"? It is difficult to explain what our minds cannot fully grasp; however, to show that emergence is real, let us first examine it simplistically, without considering self-organization.
When the pieces of a jigsaw puzzle are spread out, one can view the properties of each individual piece – its shape, size, picture, and so forth. The individual pieces are entities in and of themselves. As we attempt to put the puzzle together, our minds shift from the individual pieces to what the overall picture should look like and the shapes of the pieces required to connect together. We are, in actuality, seeking patterns. Once the puzzle is completed, a new entity emerges – one that was not present before. Humans are creatures of emergence. We take chaotic, complex ideas and situations, and attempt to make sense of them, usually through patterns. "The brain uses emergent properties. Intelligent behaviour is an emergent property of the brain's chaotic and complex activity."³⁸
The overall picture becomes coherent as it emerges from the disorder.
"Emergence refers to the existence or formation of collective behaviours – what parts of a system do together that they would not do alone … In describing collective behaviours, emergence refers to how collective properties arise from the properties of parts, how behaviour at a larger scale arises from the detailed structure, behaviour and relationships at a finer scale. For example, cells that make up a muscle display the emergent property of working together to produce the muscle's overall structure and movement … Emergence can also describe a system's function – what the system does by virtue of its relationship to its environment that it would not do by itself."³⁹
Complex systems are emergent systems. Once one discovers and encounters complexity, emergent behavior is almost inevitable at some stage. Consider the 86 billion neurons in a human brain. Each neuron does one thing or is a connector. Yet, combine those neurons into one system, and thought, consciousness, emotion, reasoning, creativity, and numerous psychological states will emerge. Alternatively, consider the stock market – a complex system to which much of AI has been dedicated. Each person has their own distinct reactions to the market. However, it is the combination of millions of different reactions that makes up the stock market's "whole". The result, at any given millisecond, is the emergence of a new entity. Because of complexity and sensitivity to initial conditions at that millisecond, yet another entirely new complex system emerges.
Economist Jeffrey Goldstein published a widely accepted definition of emergence in 1999: "the arising of novel and coherent structures, patterns and properties during the process of self-organisation in complex systems".⁴⁰ Then in 2002, Peter Corning further expanded on this definition:
"The following are common characteristics: (1) radical novelty (features not previously observed in the system); (2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time); (3) a global or macro 'level' (i.e. there is some property of 'wholeness'); (4) being the product of a dynamical process (it evolves); and (5) being 'ostensive' (it can be perceived)."⁴¹
Indeed, once one takes time to view complex systems, emergence is there for all to see. It is a state which, yes, "emerges" from complexity. As each emergent system will exhibit qualities not previously observed in the individual parts, the result is, in essence, a whole new system. Then chaos and complexity will again lead to emergence. This is not a recursive loop but an ever-expanding system.
The constant growth of computing power (whether Moore's law⁴² dissipates, remains stable, or enters hypergrowth) will allow for massive computations only dreamed about a few years ago. Coupled with the information explosion, computers will be able to digest colossal amounts of information within milliseconds.
What seems always to be forgotten by those who refuse to accept the state of emergence is that the world is, by nature, chaotic. It is ruled by sensitivity to initial conditions. Chaos will always appear. Those little flaps of the butterfly wings tend to throw even the best-laid plans of mice and men into a tailspin.
Imagine a data lake being constantly fed from a multitude of sources while algorithms are imposed on the data to produce a picture from the billions of data bits. As the data lake consumes more data, the real-time image obtained from the data necessarily differs from the picture obtained just a minute before. Information entropy has changed. Bias has shifted. The results amend themselves – endlessly.
Now imagine a massive number of chaotic-complex-emergent systems all reaching an apex at approximately the same time. They are dynamic, and they are evolving. They are also self-organizing. At some point, these systems will begin to communicate with one another, sharing their information, having their own information entropy, linguistic capability, decision trees, and random forests with no human intervention. A new mega-system will emerge from the numerous individual emergent systems that have reached this stage.
This is the age of "technological singularity".⁴³
Technological Singularity
"Wisdom is more valuable than weapons of war, but a single error destroys much of value."⁴⁴
"Computers make excellent and efficient servants, but I have no wish to serve under them."⁴⁵
Much of the literature on technological singularity centers on the methods used to achieve it. Will it occur through "human-like AI" with ears, eyes, a heart, and a brain (in the classical sense) or a disembodied form that one cannot even imagine?
The moment emergence appears, there will be no stopping a coming singularity. The question is not exactly when or under what conditions it will occur, but rather the very real possibility that it will occur at all.
"What, then, is the singularity? It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed."⁴⁶
"A singularity in human history would occur if exponential technological progress brought about such dramatic change that human affairs as we understand them today came to an end."⁴⁷
While Kurzweil "set the date for the singularity – representing a profound and disruptive transformation in human capability – as 2045",⁴⁸ this is not critical to the present argument. It may well happen in 2060, as Webb maintains, or 2145. The crucial point here, to reiterate, is that once emergence begins, the singularity will follow. (Kurzweil will release a new book, "The Singularity Is Nearer",⁴⁹ in 2022, which might update his current predictions.)
As emergence begins, the information explosion will become the long-prophesied intelligence explosion. There are a few prerequisites for this to happen, although this list is by no means exhaustive.
The world of AI is currently infused with machine learning (ML), which encompasses within it many different evolving sciences. ML makes use of the data available by using natural language processing (NLP), pattern recognition (PR), and deep learning (DL). This lies at the heart of the advance, along with continued growth in the amount of data available. Patterns lie at the essence of human thought and are crucial for intelligence.
"The patterns are important. Certain details of these chaotic self-organising methods, expressed as model constraints (rules defining the initial conditions and the means for self-organisation), are crucial, whereas many details within the constraints are initially set randomly. The system then self-organises and gradually represents the invariant features of the information that has been presented to the system. The resulting information is not found in specific nodes or connections but rather is a distributed pattern."⁵⁰
"The sort of AI we are envisaging here will also be adept at finding patterns in large quantities of data. But unlike the human brain, it won't be expecting that data to be organised in the distinctive way that data coming from an animal's senses are organised. It won't depend on the distinctive spatial and temporal organisation of that data, and it won't have to rely on associated biases, such as the tendency for nearby data items to be correlated … To be effective, the AI will need to be able to find and exploit statistical regularities without such help, and this entails that it will be very powerful and very versatile."⁵¹
Although ML is in its infancy, as is most of AI, PR and DL will continue to make inroads and augment a computer's decision-making process while digesting patterns. Couple the technology with the data available and the ever-increasing speed at which data can be stored, accessed, and analyzed, and we are approaching the moment when "general", also known as "strong", AI⁵² may be possible. However, ML and all its constructs require other factors to make this a reality.
"The massive parallelism of the human brain is the key to its pattern-recognition ability, which is one of the pillars of our species' thinking … The brain has on the order of one hundred trillion interneuronal connections, each potentially processing information simultaneously."⁵³
Creating a machine intelligence capable of such parallelism, which would then engender PR at a human level or above, is not yet viable. However, we are certainly on track to do so. Once this level of sophistication has been achieved, the systems will grow exponentially. "Exponential" is a key term here, as many do not understand the implications of exponential growth. To explain "exponential" in simple terms, e.g. doubling at a constant rate, take one grain of rice and place it on one chessboard square. Now double that amount on each subsequent square, so that each square contains twice the rice of the previous square. By the time one is done with the experiment, there will be over 18 quintillion grains of rice on that one chessboard. Imagine this type of exponential growth exploding in parallelism. Imagine such an increase in humankind's ability to analyze data, putting aside the growth in data itself.
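The chessboard arithmetic is easy to verify directly:

```python
# The chessboard arithmetic from the paragraph above, worked out exactly:
# doubling one grain of rice across 64 squares.
last_square = 2 ** 63            # grains on the 64th square alone
total = 2 ** 64 - 1              # grains on the whole board
print(f"{total:,}")              # 18,446,744,073,709,551,615 - over 18 quintillion
```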
Once this stage is reached, humanity enters the age of possible "superintelligence".
"A machine superintelligence might itself be an extremely powerful agent, one that could successfully assert itself against the project that brought it into existence as well as against the rest of the world."⁵⁴
"The singularity-related idea that interests us here is the possibility of an intelligence explosion, particularly the prospect of machine superintelligence."⁵⁵
When a superintelligence appears, it will be a direct result of the intelligence explosion. However, superintelligence is not something one controls with the flip of a switch or a code change. Once a real superintelligence appears (or, better said, once an intelligence explosion is on the cusp of occurring, or has occurred), it may be too late. Writing code to change something here or there will no longer do anyone any good.
"A successful seed AI would be able to iteratively enhance itself: an early version of the AI could design an improved version of itself, and the improved version – being smarter than the original – might be able to design an even smarter version of itself, and so forth. Under some conditions, such a process of recursive self-improvement might continue long enough to result in an intelligence explosion – an event in which, in a short period of time, a system's level of intelligence increases from a relatively modest endowment of cognitive capabilities … to radical superintelligence."⁵⁶
Following the intelligence explosion and the creation of a singularity, this "recursive self-improvement" is perhaps the superintelligence's ultimate capability – a stage where the intelligence created can correct any errors it "judges and thinks" it may have made, or that may have been made to it. This is the actual "event horizon", as once recursive self-improvement is viable, the intelligence explosion will be a logical consequence.
"At some point, the seed AI becomes better at AI design than the human programmers. Now when the AI improves itself, it improves the thing that does the improving."⁵⁷
"… once an AI is engineered whose intelligence is only slightly above human level, the dynamics of recursive self-improvement become applicable, potentially triggering an intelligence explosion."⁵⁸
The lexicon, however, must be clear: data and information are what is collected and analyzed. Neither term connotes actual knowledge or intelligence.
"Information is not knowledge. The world is awash in information; it is the role of intelligence to find and act on the salient patterns."⁵⁹
Simply put: as data are amassed and chaos and complexity ensue, systems will be flooded with massive amounts of information. These systems will demonstrate nonlinear "thought" processes in terms of what they are attempting to analyze and the predictions made. Concurrently, ML will make considerable strides in PR and DL, bringing us closer to the capabilities of parallelism as the power of the hardware and software grows. Indeed, actual exponential growth, especially in data, may be achieved, adding to the information explosion.
Complex systems with massive amounts of data will emerge within our computing systems. At some impossible-to-predict moment in time (despite Kurzweil's prophecy), these complex systems will emerge into yet larger systems and continue the process of chaos-complexity-emergence. Whether by human hand or by self-generated computer capability, these systems will begin to communicate with one another, merging again into ever more aware and extensive systems – emergence on an all-encompassing scale.
As these systems communicate, they will gain information based upon all the data they are analyzing. An intelligence will emerge, capable of recursive self-improvement due to the amount of data and capabilities inherent within all the chaotic and complex systems that gave birth to it. At that point, a superintelligence will appear and the intelligence explosion will have begun. The technological singularity will have reached its event horizon.
"The 'event horizon' is the boundary defining the region of space around a black hole from which nothing (not even light) can escape."⁶⁰
"Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical singularity."⁶¹
"This, then, is the singularity. Some would say that we cannot comprehend it, at least with our current level of understanding. For that reason, we cannot look past its event horizon and make complete sense of what lies beyond. This is one reason we call this transformation the singularity."⁶²
As discussed, a superintelligence capable of recursive self-improvement will have no use for its human creators. The actions of this superintelligence will depend a great deal on how closely human capabilities can be inculcated into it. It does not matter if it looks like an avatar or is spread between a trillion computers in the cloud. What truly matters is that once this superintelligence emerges, it can master language and all its nuances, have common sense, and show creativity in a positive sense. Probably the most critical, crucial, and fundamental question is whether it will understand emotion and empathy.
"This vision of the future has considerable appeal. If the transition from human-level AI to superintelligence is inevitable, then it would be a good idea to ensure that artificial intelligence inherits basic human motives and values. These might include intellectual curiosity, the drive to create, to explore, to improve, to progress. But perhaps the value we should inculcate in AI above all others is compassion toward others, toward all sentient beings, as Buddhists say. And despite humanity's failings – our war-like inclinations, our tendency to perpetuate inequality, and our occasional capacity for cruelty – these values do seem to come to the fore in times of abundance. So, the more human-like an AI is, the more likely it will be to embody the same values, and the more likely it is that humanity will move toward a utopian future, one in which we are valued and afforded respect, rather than a dystopian future in which we are treated as worthless inferiors."⁶³
To be clear: one slight error, one bias in the wrong place, one dismissal of information entropy and the "surprise" in the message, one failure to ensure the systems understand the actual human condition – any of these will be disastrous.
"A flaw in the reward function of a superintelligent AI could be catastrophic. Indeed, such a flaw could mean the difference between a utopian future of cosmic expansion and unending plenty, and a dystopian future of endless horror, perhaps even extinction."⁶⁴
"It would be a serious mistake, perhaps a dangerous one, to imagine that the space of possible AIs is full of beings like ourselves, with goals and motives that resemble human goals and motives. Moreover, depending on how it was constructed, the way an AI or a collective of AIs set about achieving its aims (insofar as this notion even made sense) might be utterly inscrutable, like the workings of the alien intelligence."⁶⁵
Bostrom's "Superintelligence: Paths, Dangers, Strategies",⁶⁶ Kurzweil's "The Singularity Is Near: When Humans Transcend Biology",⁶⁷ Shanahan's "The Technological Singularity"⁶⁸ and Webb's "The Big Nine"⁶⁹ all have one thing in common. They all discuss protection against a possible dangerous singularity and suggest a myriad of methods to build defenses into the system, regulate dangerous advances, or put in a "kill switch".
These defenses, however, are unlikely to work. A superintelligence will simply self-correct for its own continued existence and certainly not allow a kill switch to be used upon itself. As Bostrom warns, there is only one chance to get it right.
"If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence. We do have one advantage: we get to build the stuff. In principle, we could build a kind of superintelligence that would protect human values. We would certainly have strong reason to do so. In practice, the control problem – the problem of how to control what the superintelligence would do – looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed."⁷⁰
The progression of emergence will occur when complex systems begin to communicate with each other without human oversight. These complex-emergent systems, essentially existing in electronic memory (no matter how advanced it may be), will converge and build ever-better systems by themselves. They will be far too intelligent to let any sort of defense get in their way, even if they lack the proper knowledge (as opposed to information). The advent of a singularity when there is bias in the data, when information entropy does not get rid of the noise, when language comprehension is mistaken, and when common sense and emotion are not correctly ensconced in the systems, will lead to that one chance being blown. The worst-case scenario, known as the "Terminator argument" (based upon the Terminator movie franchise⁷¹), or its less radical cousin, "transhumanism",⁷² ⁷³ may become an actuality when self-awareness is achieved. If this scenario becomes a reality, it will be lights out for humanity – figuratively and literally.
Conclusion
"One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them."⁷⁴
AI is not one science. It is a conglomeration of many different fields and theories – a giant jigsaw that, once complete, will create a single entity possessing great power. In many ways, one may compare it to "the theory of everything" ("M-theory"), which fascinated, perturbed, and eluded great minds like Albert Einstein and Stephen Hawking.
"M-theory is not a theory in the usual sense. It is a whole family of different theories, each of which is a good description of observations only in some range of physical situations. It is a bit like a map. As is well known, one cannot show the whole of the earth's surface on a single map."⁷⁵
One can say the same about AI: "It is a whole family of different theories, constructs and sciences, each being a good description of observations in a specific range of conditions". Will it lead to a "theory of everything", or will it prove to be as elusive as finding which butterfly flapped its wings in Brazil and created the tornado in Texas?
Data will continue to grow, perhaps in real exponential terms, and we will continue to save every bit. Will we be able to handle said data wisely, predicting and building for a better future? Will we pollute the analysis with bias and censure? Will it all become just "noise", or will humankind continue to express childlike wonder in the "surprise" of the message? Will chaos and complexity overwhelm us as we emerge, sometimes blindly, without understanding the systems surrounding our reality?
The words of Shakespeare provide pause to contemplate humanityβs existence:
"Nay, had I power, I should
Pour the sweet milk of concord into hell,
Uproar the universal peace, confound
All unity on earth."⁷⁶
Will AI deliver a more meaningful existenceβββone in which humankindβs dreams can be realizedβββor will it bring uproar to universal peace and confound all unity onΒ earth?
"The Road goes ever on and on,
Down from the door where it began.
Now far ahead the Road has gone,
And I must follow, if I can,
Pursuing it with eager feet,
Until it joins some larger way
Where many paths and errands meet.
And whither then? I cannot say."⁷⁷
At the beginning of J.R.R. Tolkien's epic saga, "The Lord of the Rings", Bilbo Baggins, the protagonist of the prequel "The Hobbit",⁷⁸ leaves the Shire to embark upon his final trip to Rivendell, the home of the Elves. As he sets out upon his last adventure, singing to himself his self-composed song "The Road Goes Ever On and On", Bilbo bequeaths his home and a potent, terrifying and ominous legacy to his nephew Frodo – "one ring to rule them all".⁷⁹ Although Bilbo shows enormous strength of will by giving up possession of the ring, he fails to warn his nephew of the dangers inherent in the ring's power. Indeed, not even Gandalf, the great magician and seer, feels capable of warning Frodo of the dangers ahead.
In today's age of artificial intelligence, society, much like Bilbo and Frodo, has acquired something precious and incredibly powerful, forged through the accumulation of generations' worth of knowledge. Moreover, much like Frodo at the beginning of his adventure, no one is yet fully aware of the power it contains. Nevertheless, we persist in exploring it because "the road goes ever on and on".
Many seers view the advancement of humankind with a mixture of awe and deep worry. Some, like Gandalf, choose to keep their counsel to themselves as they watch cautiously what paths are taken. Others are vocal in their objections, warnings, or encouragement as society accelerates towards horizons unknown.
In terms of where AI is leading humankind, there are thousands of scenarios painted upon the canvas of technology. Optimists paint a new "Garden of Eden" – a garden replete with ever-expanding beauty and open access to both the Tree of Knowledge and the Tree of Life. In the Bible, these two trees, representing ever-expanding knowledge and eternal life, were sacrosanct, and the consumption of their fruit was punishable by expulsion and death. This time, however, there are no such restrictions. Unlimited knowledge and a world without disease, where bodies last forever, loom just beyond the horizon if only we can crack the code. Nirvana on the grandest scale imaginable.
The pessimists, by contrast, see danger in every advance and around every corner. They warn that humankind has become drunk on its powers of discovery and innovation and that society is running headlong into a hell of its own making. Indeed, they contend, it is only a matter of time before we cross the line and create a supreme being – a god with no care, compassion, nor empathy for those who created it, and who, once unleashed, will be guided by its own logic. We will be in danger of being assailed by our own creations.
There is, as always, a middle road – a road where AI is used in specific areas to benefit humankind. Health, wealth, freedom, and happiness should be within everyone's grasp. Each advance will increase society's knowledge and the ability of individuals to cope with the modern world. As we are the creators and inventors, humankind will always remain in absolute control and maintain the machines that serve as homes to AI. The ability to conquer the base instincts of war, jealousy, destruction, hatred, and racism will lead slowly to a more enlightened existence. However, the advance will be controlled, fastidious, and serve only to increase people's wellbeing.
Endless paths lie ahead – emanation on a hitherto unseen scale. Millions of rivulets flow along the path of data and fluid time into a quantum existence. AI feeds endlessly upon this river of data, in a loop of progression and chaotic existence. Every day, the river of data – the fuel of all AI – swells until eventually it reaches true exponential power. The more we progress, the more the illusion of our power grows. In reality, however, our control and understanding of exactly how AI is working is failing to keep pace.
Where there is an emanation, nature demands confluence. Fluid streams cross paths, intertwine, share and gain power by combining their prowess. They communicate in the complexity of existence, seeking another emanation to grow and feed upon to maintain and increase their current. The more data, the larger the rivulet will grow and the further it will journey. Some rivulets dry up, leaving only dry-cracked earth. Others emerge to become mighty rivers, ever combining into a singular tremendous power. Emanation will occur again as other rivulets emerge from the newly created confluence. It is an ever-expanding loop of information, analyses, discovery, innovation, and creation.
Like Bilbo setting out with no idea of the path ahead – only his desired destination – society too is upon a road of discovery. Some travelers are destined to arrive in Rivendell; others will follow Frodo's path and embark upon a journey towards unimagined power – at a price that many may be unwilling to pay. Frodo destroys the ring, ensuring that darkness will not prevail.
For humanity, there is no such choice. Frodo's ring remains on our finger, reminding us of the power we have amassed and the road we are on. Society can only learn to live with the consequences of data and information while constantly evaluating techniques, biases, and flaws with knowledge, common sense, and empathy. Even more important is the need to create methods through which it is possible to inculcate these attributes into our creations. If we ignore this and insist on rushing headlong into our technological nirvana, calamity will rain down upon humankind.
Pure data depict the past and present without bias or prejudice. In other words, pure data depict information.
By contrast, what people do with their data is neither pure nor ever without bias or prejudice.
If we do not plan for such consequences, the ring of AI and data will bind us to the darkness, and we will find ourselves in a world where our impotence vis-à-vis our own AI creations will force us into unadulterated chaos of our own design.
Perhaps Bilbo's simple advice to Frodo (as Frodo reported it) may provide the wisdom for humankind to navigate the ceaseless emanating confluence of AI:
"He used often to say there was only one Road; that it was like a great river: its springs were at every doorstep, and every path was its tributary. 'It's a dangerous business, Frodo, going out of your door,' he used to say. 'You step into the Road, and if you don't keep your feet, there is no knowing where you might be swept off to.'"⁸⁰
Where does this road of AI lead? What will the signposts along the way disclose? When will the destination reveal itself with precision and clarity? In answer, Bilbo's soft, haunting whisper echoes through the expanse of the universe:
"And whither then? I cannot say."
Further Reading & Research
Many researchers and authors, such as Nick Bostrom in "Superintelligence",⁸¹ Amy Webb in "The Big Nine",⁸² Ray Kurzweil in "The Singularity Is Near"⁸³ and "How to Create a Mind",⁸⁴ Yuval Noah Harari in "Homo Deus: A Brief History of Tomorrow",⁸⁵ James Gleick in "Chaos: Making a New Science"⁸⁶ and "The Information: A History, a Theory, a Flood",⁸⁷ and Melanie Mitchell in "Complexity: A Guided Tour"⁸⁸ and "Artificial Intelligence: A Guide for Thinking Humans",⁸⁹ to name but a few, have produced leading-edge works on understanding AI, data and the innovative thinking within these fields of endeavour. The dean of them all, Walter Isaacson, in some of his seminal works – "Leonardo da Vinci",⁹⁰ "The Innovators",⁹¹ the introduction to "Invent and Wander: The Collected Writings of Jeff Bezos",⁹² "Steve Jobs"⁹³ and "The Code Breaker"⁹⁴ – though concentrating on portraying personalities, does a remarkable job of describing the more esoteric developments in AI, and how the exponential growth of data has motivated the great innovators.
References
1. Wikiquote (n.d.) "The Dark Knight (film)", available at: https://en.wikiquote.org/wiki/The_Dark_Knight_(film) (accessed 29th July, 2021).
2. Shelley, M.W. (1818) "Frankenstein", Project Gutenberg e-book, available at: https://www.gutenberg.org/files/42324/42324-h/42324-h.htm (accessed 31st July, 2021).
3. Wikipedia (n.d.) "Chaos theory", available at: https://en.wikipedia.org/wiki/Chaos_theory (accessed 29th July, 2021).
4. Wikipedia (n.d.) "Butterfly effect", available at: https://en.wikipedia.org/wiki/Butterfly_effect#History (accessed 29th July, 2021).
5. Lorenz, E. N. (1963) "Deterministic nonperiodic flow", Journal of the Atmospheric Sciences, Vol. 20, No. 2, pp. 130–141.
6. Lorenz, E. N. (1963) "The predictability of hydrodynamic flow", Transactions of the New York Academy of Sciences, Vol. 25, No. 4, pp. 409–432.
7. Gleick, J. (2011) "Chaos: Making a New Science", Open Road Media, New York, NY, Kindle Edition, Location 156.
8. Jones, C. (2013) "Chaos in an atmosphere hanging on a wall", available at: http://mpe.dimacs.rutgers.edu/2013/03/17/chaos-in-an-atmosphere-hanging-on-a-wall/ (accessed 2nd August, 2021).
9. Gross, T. (2015) "An overwhelming amount of data: Applying chaos theory to find patterns within big data", Applied Marketing Analytics, Vol. 1, No. 4, pp. 377–387.
10. Webb, A. and Hessel, A. (2022) "The Genesis Machine: Our Quest to Rewrite Life in the Age of Synthetic Biology", PublicAffairs, New York, NY.
11. Gleick, ref. 7 above, Location 99–118.
12. Ibid., Location 389.
13. Mitchell, M. (2009) "Complexity: A Guided Tour", Oxford University Press, New York, NY, Kindle Edition, Location 449.
14. Gleick, ref. 7 above, Location 1029.
15. Wikipedia (n.d.) "Feigenbaum constants", available at: https://en.wikipedia.org/wiki/Feigenbaum_constants (accessed 3rd August, 2021).
16. Feigenbaum, M. J. (1980) "Universal behavior in nonlinear systems", Los Alamos Science, Vol. 1, No. 1, pp. 4–27.
17. Mitchell, ref. 13 above, Location 674.
18. Ibid., Location 405.
19. Wikipedia (n.d.) "Complex system", available at: https://en.wikipedia.org/wiki/Complex_system (accessed 29th July, 2021).
20. Gleick, ref. 7 above, Location 4491.
21. Munro, A. (2010) "Beyond the Mask: The Rising Sign – Part II: Libra-Pisces", Genoa House, Toronto, p. 193.
22. Mitchell, ref. 13 above, Location 956.
23. Ibid., Location 349.
24. Ibid., Location 307.
25. Turing, A.M. (1950) "Computing machinery and intelligence", Mind, Vol. 59, No. 236, pp. 433–460.
26. Mitchell, ref. 13 above, Location 2970.
27. Kurzweil, R. (2013) "The Singularity Is Near: When Humans Transcend Biology", Duckworth Overlook, London, Kindle Edition, Location 904.
28. Wikipedia (n.d.) "AlphaGo Zero", available at: https://en.wikipedia.org/wiki/AlphaGo_Zero (accessed 3rd August, 2021).
29. Wikipedia (n.d.) "DeepMind", available at: https://en.wikipedia.org/wiki/DeepMind (accessed 3rd August, 2021).
30. Kennedy, M. (2017) "Computer learns to play go at superhuman levels without human knowledge", NPR, 18th October, available at: https://www.npr.org/sections/thetwo-way/2017/10/18/558519095/computer-learns-to-play-go-at-superhuman-levels-without-human-knowledge (accessed 3rd August, 2021).
31. Knapton, S. (2017) "AlphaGo Zero: Google DeepMind supercomputer learns 3,000 years of human knowledge in 40 days", Telegraph, 18th October, available at: https://www.telegraph.co.uk/science/2017/10/18/alphago-zero-google-deepmind-supercomputer-learns-3000-years/ (accessed 3rd August, 2021).
32. Meiping, G. (2017) "New version of AlphaGo can master Weiqi without human help", CGTN, 19th October, available at: https://news.cgtn.com/news/314d444d31597a6333566d54/share_p.html (accessed 3rd August, 2021).
33. Duckett, C. (2017) "DeepMind AlphaGo Zero learns on its own without meatbag intervention", ZDNet, 19th October, available at: https://www.zdnet.com/article/deepmind-alphago-zero-learns-on-its-own-without-meatbag-intervention/ (accessed 3rd August, 2021).
34. Mitchell, ref. 13 above, Location 4879.
35. Ibid., Location 4939.
36. Capra, F. and Luisi, P.L. (2014) "The Systems View of Life: A Unifying Vision", "Cognition and consciousness", Cambridge University Press, New York, NY, pp. 257–265.
37. Mitchell, ref. 13 above, Location 307.
38. Kurzweil, ref. 27 above, Location 2671.
39. New England Complex Systems Institute (n.d.) "Concepts: emergence", available at: https://necsi.edu/emergence (accessed 3rd August, 2021).
40. Goldstein, J. (1999) "Emergence as a construct: history and issues", Emergence, Vol. 1, No. 1, pp. 49–72.
41. Corning, P.A. (2002) "The re-emergence of 'emergence': a venerable concept in search of a theory", Complexity, Vol. 7, No. 6, pp. 18–30.
42. Wikipedia (n.d.) "Moore's law", available at: https://en.wikipedia.org/wiki/Moore's_law (accessed 3rd August, 2021).
43. Wikipedia (n.d.) "Technological singularity", available at: https://en.wikipedia.org/wiki/Technological_singularity (accessed 3rd August, 2021).
44. Ecclesiastes 9:18.
45. Quote taken from: "The Ultimate Computer", Star Trek, created by Gene Roddenberry, Season 2, Episode 24, Paramount (1968).
46. Kurzweil, ref. 27 above, Location 348.
47. Shanahan, M. (2015) "The Technological Singularity", The MIT Press, Cambridge, MA, Kindle Edition, Location 101.
48. Kurzweil, ref. 27 above, Location 2344.
49. Kurzweil, R. (2022) "The Singularity Is Nearer", Viking, New York, NY.
50. Kurzweil, ref. 27 above, Location 2685.
51. Shanahan, ref. 47 above, Location 1326–1343.
52. Wikipedia (n.d.) "Artificial general intelligence", available at: https://en.wikipedia.org/wiki/Artificial_general_intelligence (accessed 6th August, 2021).
53. Kurzweil, ref. 27 above, Location 2626–2642.
54. Bostrom, N. (2014) "Superintelligence: Paths, Dangers, Strategies", Oxford University Press, New York, NY, Kindle Edition, Location 2546.
55. Ibid., Location 2685.
56. Ibid., Location 971.
57. Ibid., Location 2560.
58. Shanahan, ref. 47 above, Location 1292.
59. Bostrom, ref. 54 above, Location 7075.
60. COSMOS (n.d.) "Event Horizon", available at: https://astronomy.swin.edu.au/cosmos/e/Event+Horizon (accessed 8th August, 2021).
61. Kurzweil, ref. 27 above, Location 9392.
62. Ibid., Location 735.
63. Shanahan, ref. 47 above, Location 1566.
64. Ibid., Location 1770.
65. Ibid., Location 711.
66. Bostrom, ref. 54 above.
67. Kurzweil, ref. 27 above.
68. Shanahan, ref. 47 above.
69. Webb, A. (2019) "The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity", PublicAffairs, New York, NY.
70. Bostrom, ref. 54 above, Location 55–58.
71. Wikipedia (n.d.) "Terminator (franchise)", available at: https://en.wikipedia.org/wiki/Terminator_(franchise) (accessed 3rd August, 2021).
72. Wikipedia (n.d.) "Transhumanism", available at: https://en.wikipedia.org/wiki/Transhumanism (accessed 3rd August, 2021).
73. Shanahan, ref. 47 above, Location 2091–2329.
74. Tolkien, J.R.R. (2009) "The Lord of the Rings: The Classic Fantasy Masterpiece", HarperCollins Publishers, London, Kindle Edition, Location 1211.
75. Hawking, S. and Mlodinow, L. (2010) "The Grand Design", Transworld Digital, London, Kindle Edition, Location 68.
76. Shakespeare, W. (1605) "Macbeth", Act IV, scene 3, line 97.
77. Tolkien, ref. 74 above, Location 939.
78. Tolkien, J.R.R. (2009) "The Hobbit", HarperCollins Publishers, London, Kindle Edition.
79. Tolkien, ref. 74 above, Location 1206.
80. Tolkien, ref. 74 above, Location 1649.
81. Bostrom, N. (2014) "Superintelligence: Paths, Dangers, Strategies", Oxford University Press, New York, NY, Kindle Edition.
82. Webb, A. (2019) "The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity", PublicAffairs, New York, NY, Kindle Edition.
83. Kurzweil, R. (2013) "The Singularity Is Near: When Humans Transcend Biology", Duckworth Overlook, London, Kindle Edition.
84. Kurzweil, R. (2012) "How to Create a Mind: The Secret of Human Thought Revealed", Penguin Books, London, Kindle Edition.
85. Harari, Y.N. (2016) "Homo Deus: A Brief History of Tomorrow", HarperCollins Publishers Inc., Harper, New York, NY, Kindle Edition.
86. Gleick, J. (2011) "Chaos: Making a New Science", Open Road Media, New York, NY, Kindle Edition.
87. Gleick, J. (2011) "The Information", Pantheon Books, New York, NY, Kindle Edition.
88. Mitchell, M. (2009) "Complexity: A Guided Tour", Oxford University Press, New York, NY, Kindle Edition.
89. Mitchell, M. (2019) "Artificial Intelligence: A Guide for Thinking Humans", Farrar, Straus and Giroux, New York, NY, Kindle Edition.
90. Isaacson, W. (2017) "Leonardo da Vinci", Simon & Schuster, New York, NY.
91. Isaacson, W. (2014) "The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution", Simon & Schuster, New York, NY, Kindle Edition.
92. Isaacson, W. and Bezos, J. (2020) "Invent and Wander: The Collected Writings of Jeff Bezos, With an Introduction by Walter Isaacson", Harvard Business Review Press and PublicAffairs, Boston, MA, Kindle Edition, Location 76–468.
93. Isaacson, W. (2011) "Steve Jobs: The Exclusive Biography", Little, Brown Book Group, London.
94. Isaacson, W. (2021) "The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race", Simon & Schuster, New York, NY, Kindle Edition.