Modeling Reality and Envisioning the Future

How humans work, how they interact with their environment, and what they need to do in order to fulfill their collective needs.

Afika Nyati
45 min read · Apr 2, 2017

I have found myself thinking a lot about death lately. I know, what a morbid topic for a young adult to preoccupy his waking thoughts with. “Why are you thinking about death? You still have life ahead of you. You go to MIT, you have a loving family, you’ll never have to think about money! You live the life of the one percent.” Don’t get me wrong, I acknowledge and appreciate my privileged position, but a lot of people seem to be missing the point. Moreover, the more time I’ve devoted to pondering said point, the more I’ve become disillusioned and disheartened by the civilization into which I was born, and as this think piece unfolds, I hope at a minimum to help you understand my grievances, if not to persuade you towards similar outlooks. Am I suicidal? No. On the contrary, I love life, and it’s this interminable love that gave rise to my fear of death and ignited the onset of existential thoughts that became the basis for what I will share in this think piece.

Be warned, this think piece is incomplete, it’s overoptimistic about human nature, and is likely fraught with many inaccuracies, technical inconsistencies, and assumptions about how the world works. But nonetheless, I’ve felt the urge to translate the mental dialogue I’ve had since I left high school into a physical record, if not for the purpose of starting a conversation, then for the therapeutic exercise inherent in the process. “But what is this think piece about?” I’ve had trouble articulating this to my peers recently, but if I had to give a one-line summary, it would best be summarized as an attempt at constructing a model of physical reality and a proposal for steering humanity towards some best-case utopian future. This is a bold and somewhat arrogant undertaking; this I understand. But barring billionaire Elon Musk, I see few denizens of our planet actively attempting to deter humanity from precipitating its imminent self-inflicted extinction. I just want to make my contribution using the means I currently have — my knowledge of the current and prospective technological landscape and my voice. But before getting to the heart of the matter, I’m going to dial back a bit and give you some context on human civilization and present how our history and biological evolution have led to our present-day society. We often overlook these details or classify them as fixed conditions, but I want to convince you otherwise.

The Universe

We live on a tiny rock in a vast universe. So vast that its immensity is at times nearly inconceivable. Few people, barring the late astrophysicist Carl Sagan, have been able to articulate this so adeptly. For the sake of refreshing your memory and re-igniting your childhood wonder, here’s an excerpt from Sagan’s book ‘Pale Blue Dot’ voiced over a pretty curation of photographs and video of Earth-related subjects:

What a humbling truth indeed, and to add fuel to the fire, recent theories in Physics posit that our universe could be one of many universes present within an enormous landscape of possible universes — a universe within a strictly increasing set of universes, if you will. Furthermore, while we live in a universe governed by the spacetime continuum, a universe composed of three-dimensional matter that exists through the passage of time, in some circles it is believed that there could be as many as ten dimensions; however, as far as we know, we are eternally confined to living within three of these theorized dimensions.

While it’s “fun” to reflect on the sheer magnitude of the universe (of universes and other possible higher-dimensional spaces), for the sake of staying on topic let’s bound the scope of our discussion to the three-dimensional spacetime continuum, i.e. the three-dimensional matter existing through the passage of time, and focus on the scale of interest — the human scale.

As far as physical science is concerned, the universe began 13.8 billion years ago with the Big Bang, which triggered the expansion of the universe.

“After the initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced through gravity in halos of dark matter, eventually forming the stars and galaxies visible today.”

- Wikipedia, “Big Bang”, Wikimedia Foundation. Published March 26, 2017; accessed March 28, 2017.

Humans

However, we’re not really concerned much with the time period between the Big Bang and the formation of the stars, galaxies, and planets; we’re more concerned with the time period beginning from the dawn of the modern Homo sapiens to present-day human civilization. Nature has had billions of years to refine the human species through a phenomenon known as survival of the fittest, which essentially favors organisms that best adapt to their environment. If one were to anthropomorphize nature into a designer, survival of the fittest would essentially be nature iterating on its ideas, refining those prototypes that best achieve its objective function and abandoning those that don’t. For those not familiar with the concept of an objective function, here’s a brief description:

Objective Function / əbˈdʒɛk.tɪv ˈfʌŋkʃən / compound noun 1. A mathematical equation that describes the output target that corresponds to the maximization of a particular goal or objective.

For nature this goal is survival. So in essence, one can envision the process of evolution as nature exploring and creating organisms from an unbounded option space, using the survival measure of an organism as an optimization instrument to heuristically traverse this infinite option space with the aim of converging on some optimal design that fully maximizes its desired objective function. I bring this point up to highlight the fact that such a goal might take forever to achieve, especially through a partially stochastic (randomized) process such as evolution. As a result, we have reason to believe that Homo sapiens in its current form is in no way an optimal design. Furthermore, as a consequence of the emergence of consciousness, humans have developed their own set of desires that weren’t initially factored into the original objective function set by nature — desires such as love, esteem, and self-actualization. For this reason, humans today are the product of an objective function that by our present standards is underspecified. Indeed, an objective function built on survival is not suitable for satisfying the needs of a 21st-century human. This is a problem; we need to reconcile our objective function with our higher-level needs.
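To make the objective-function framing above concrete, here is a minimal, hypothetical sketch of evolution as stochastic optimization: a toy “survival” fitness function, selection of the fittest half of a population, and reproduction with random mutation. The fitness function and all parameters are illustrative assumptions, not a model of real biology.

```python
import random

def survival_fitness(genome):
    # Toy objective function: "survival" peaks when every trait equals 0.5.
    # A real organism's objective is, of course, nothing this tidy.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(generations=200, pop_size=30, genome_len=5, seed=0):
    rng = random.Random(seed)
    # Start from random "prototypes" scattered across the option space.
    population = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the half that best satisfies the objective function.
        population.sort(key=survival_fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction with random mutation: the stochastic part of the search.
        children = [[g + rng.gauss(0, 0.05) for g in parent] for parent in survivors]
        population = survivors + children
    return max(population, key=survival_fitness)

best = evolve()
print(survival_fitness(best))
```

Even this toy illustrates the point made above: the search improves the design heuristically, but with random mutations it may take arbitrarily long to reach a true optimum, and it only ever optimizes the objective it was given.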

Abraham Maslow, a psychologist from the humanistic school of Psychology, proposed the following hierarchy governing the needs of a fully functioning human:

The hierarchy is portrayed in the shape of a pyramid with the largest, most fundamental needs at the bottom and the need for self-actualization at the top. Maslow posits that the most basic level of needs must be met before the individual will strongly desire (or focus motivation upon) higher-level needs. Thus, it is unlikely for a human to exercise creativity if they are unable to satisfy their basic need for food. I will refer back to this hierarchy of needs as the think piece develops.

Before continuing on with mutual human goals and objectives, let’s take some time to critically analyze exactly what a human is, and how humans relate to each other and their environment. Thereupon, we will discuss the implications of these conclusions and their relation to common human goals and objectives.

Fundamentally, a human is an organism or physical agent that acts in accordance with its objective function in the physical environment — the objective function in basic form being survival, and the physical environment being Earth in three-dimensional space. Humans are equipped with a body and a mind; the purpose of the body is to interact with the physical environment and to sense physical stimuli to be sent to the mind. A human is equipped with five senses:

  1. Visual perception: enables a human to map visible light into a visual representation of its position in the physical environment.
  2. Auditory perception: enables a human to perceive sounds in the physical environment by detecting vibrations in the air.
  3. Olfactory perception: enables a human to smell objects in the physical environment.
  4. Somatosensory perception: enables a human to perceive touch and its position and movement in the physical environment.
  5. Gustatory perception: enables a human to taste objects in the physical environment.

The mind, the software of the brain, is able to map physical stimuli received from the body’s five senses into a mental construction of physical reality. “So our senses allow our mind to perceive the physical environment. That’s great! But what good is it if humans are essentially machines that can sense and perceive their environment only?” That’s where limbs come into play. Physical appendages, i.e. arms and legs, enable a human to traverse physical space through any combination of the three spatial axes and allow it to physically interact with other agents and objects in the physical environment by sending actuation signals from its brain to its limbs.

Aside from perception, the brain is also able to store information about the physical environment in the form of mental contents. This is advantageous because the ability to recall locations and patterns in the physical environment allows one to learn and make informed judgements about known phenomena based on prior phenomena, or inferences about new phenomena based on similar prior phenomena. This is the basis of knowledge. As a result, humans are able to store knowledge, a critical prerequisite for pattern recognition and long-term planning. These are behaviors that have set humans apart from other animal species.

Let’s consider the following scenario for a second: digital computers are powerful machines. The fastest supercomputer known to man is able to process data at a speed of 8.2 billion megaflops and can store 30 quadrillion bytes of data, allowing such a machine and others like it unparalleled computing power. But consider for a second how restricted such a tool would be without the existence of the internet. All results and computations produced by such computers would be underutilized if the information could not be easily disseminated. This is what the human mind would be without language.

Language

Language / ˈlaNGɡwij / noun 1. The method of human communication, either spoken or written, consisting of the use of words in a structured and conventional way.

Language gives humans the ability to share information from one brain to another. This transforms what is initially an isolated agent into a complex network of agents that are able to share information gathered from their physical environment or deduced logically from prior knowledge. This allowed early Homo sapiens to signal imminent danger to other humans or to convey novel information about the whereabouts of food, shelter, and other desired resources. Language as a tool has seen its own evolution, starting from its early days as body language to its mature, complex manifestation today. This evolution was a necessary change, for the most part, because of the lack of descriptive potential provided by body language. Spoken language provided us the faculty to communicate more efficiently, and in more granular detail.

Spoken language had two drawbacks as a primary method of information transfer and dissemination. First, information was often subject to alteration as it propagated outwards from its source along every information-recipient path — a path composed of a sequence of edges (representing information transfers) connecting a sequence of distinct vertices (representing human minds). As information propagated outwards, errors, mistranslations, and loss of information accumulated, much like in the children’s game Broken Telephone. Second, there seemed to be an information dissemination limit controlling how far one’s information could travel in both space and time. In the space domain, the dissemination potential of information was governed by the size of the union of all the unique humans reached by a recursive traversal of the information source’s human network — in simpler terms, all the humans in the source’s network, aggregated with all the humans in those humans’ networks, ad infinitum. For the most part this was geography-limited, so information became geographically trapped. In the time domain, information could not be transferred from the dead to the living, so the likelihood of a successful information transfer was highest during the living moments of the information source. What was humanity’s solution to these problems? The transcription of spoken language into written language.
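The propagation picture above (minds as vertices, information transfers as edges, with errors accumulating along each path) can be sketched in a few lines of code. This is an illustrative toy, not a model of real communication; the graph, the message, and the per-transfer corruption probability are all made-up assumptions.

```python
import random
from collections import deque

def propagate(graph, source, message, corruption=0.2, seed=1):
    """Spread a message through a social graph breadth-first.

    Each edge (an information transfer) independently garbles one
    character of the message with probability `corruption`, much like
    the children's game Broken Telephone.
    """
    rng = random.Random(seed)
    received = {source: message}  # which version each mind ends up holding
    queue = deque([source])
    while queue:
        speaker = queue.popleft()
        for listener in graph[speaker]:
            if listener in received:
                continue  # this mind already holds a version of the message
            heard = received[speaker]
            if rng.random() < corruption:
                i = rng.randrange(len(heard))
                heard = heard[:i] + "?" + heard[i + 1:]
            received[listener] = heard
            queue.append(listener)
    return received

# A small chain of minds: errors can only accumulate with distance from A.
chain = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}
versions = propagate(chain, "A", "food by the river", corruption=0.5)
```

Only minds reachable from the source ever receive anything at all, which is the space-domain limit described above; and once a mind is removed from the graph, its version of the message is gone, which is the time-domain limit.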

As amusing as this might sound, written language in the form of cave carvings and books gives dead people the ability to communicate information to people of the future. We often overlook this fact, but for most of history, the aggregated knowledge base of all living humans likely reduced in size when people died, requiring the living to reacquire lost knowledge. “If nature is trying to optimize survival, then why do we die?” Death and reproduction are mechanisms used by nature to improve the survival potential of a species as a whole, as opposed to the survival potential of a singular member of a species. In this way, malformations can be removed, and resistances to bacteria and other advantageous physical attributes can be passed on to future generations to ensure overall survival.

We’ve examined a lot of information thus far; let’s have a quick intermission and review what we’ve learnt so far. Humans are the outcome of an evolutionary process that has transpired over millions of years. In this time humans were equipped with many faculties that improved their survival potential. As hinted in the last sentence, first and foremost, humans have had the innate desire to survive; survival was and is the foundation on which all our other needs are built. However, with the emergence of consciousness, humans developed other needs such as love, esteem, and self-actualization. These needs were not factored into evolution’s objective function, and as a result there was and still is a mismatch between nature’s objective function for humans and modern humans’ desired objective function.

Over the course of their lengthy evolutionary process, humans were equipped with the ability to see, hear, taste, smell, and touch their environment. They also developed the ability to save, recognize, and recall information in the form of mental contents, which made them proficient pattern recognition and long-term planning beasts. Furthermore, with the innovation of spoken and written language, humanity became an efficient network of knowledge repositories that could disseminate information through space and time. The human body became a fairly complex system in comparison to most life. One might well have predicted a thriving future for this species; however, one would have failed to take into account a crucial variable in its success, namely the influence of the physical environment on its future.

The Environment

Resource / ˈrēˌsôrs rəˈsôrs / noun 1. A stock or supply of money, materials, staff, and other assets that can be drawn on by a person or organization in order to function effectively. 1.1. A country’s collective means of supporting itself or becoming wealthier, as represented by its reserves of minerals, land, and other natural assets. 1.2. A source of help or information. 2. Economics Land, labor, capital, and entrepreneurship/novel ideas.

Scarcity / ˈskersədē / noun 1. the state of being scarce or in short supply; shortage.

Ownership / ˈōnərˌSHip / noun 1. the act, state, or right of possessing something.

Since the dawn of life on Earth, humans and all other living organisms have been in an all-out battle for survival. Energy is the universe’s absolute currency, and all life has been in a never-ending battle to attain it. But why battle for energy? You will recall a principle you once learnt in high school, namely the law of Conservation of Energy. It states: energy cannot be created or destroyed, but can be altered from one form to another. Consequently, there is a finite amount of useful energy in the universe and on Earth; so we organisms have been forced to battle for this finite energy.

At some point in our evolution, our human lineage evolved from a microscopic organism to a macroscopic one. Because of this transformation, we were able to effortlessly secure energy from all life significantly smaller and less adapted than us, in what scientists dubbed the survival of the fittest, i.e. the organism that best adapted to its environment was the organism that passed on its genes to successive generations. When we became macroscopic, it no longer made sense to deal in measures of energy. Thus we switched to measures of resources.

Resources, as explained above, are all commodities that assist humans in satisfying their needs. They are all to some extent derivatives of energy, the absolute currency; however, resources are a better measurement system due to their discrete and macroscopic nature. Economics posits that resources can be classified as land, labor, capital, and entrepreneurship/novel ideas. Because all resources are essentially derivatives of energy, it’s sensible to observe that these resource classifications are, to some extent, all interchangeable; you can pick an arbitrary choice of two of the classifications and successfully find a way to exchange one for the other. For the sake of simplicity, I will reduce these classifications to physical resources — land, labor, capital — and abstract resources — entrepreneurship/novel ideas, which I will reclassify as information/knowledge. These classifications make more sense in the context of my model, so they will be favored; however, the reclassifications will be justified in the ensuing paragraphs. But before justifying these reclassifications, I think it’s necessary to explore why the field of economics exists in the first place.

Economics exists because of the scarce nature of resources on Earth. Because there is a limited supply of most resources, we are unable to acquire them in abundance. This is simply a corollary of the Conservation of Energy. At some point in human history, humans lived nomadic hunter-gatherer lives in which small groups with no permanent abode would hunt for food and gather flora. Because these groups remained relatively small in size, resource distribution was never an issue. It was only once we made the transition to settled civilization, agriculture, and animal domestication that we needed to develop some form of economics to manage this distribution process. In addition, as these groups slowly grew, specialization of skill and exchange of resources resulted in higher yields and overall efficiency. But due to the reality of living in a world marred by scarcity, it made no sense to trade a resource without attaining some return. This led to the formation of the bartering system, the earliest exchange system for resources.

The issue with the barter system was that there was no universally agreed-upon exchange arrangement — certain resources were more discretizable/granular than others, which led to uneven exchanges. There needed to be an intermediary that could discretize any resource into some mutually-valued commodity. Initially, gold occupied this role, but over time civilization made a shift from commodity money to representative money — a claim on a commodity using a medium of exchange that has no value in and of itself. Another significant benefit of representative money was that an exchange for one resource yielded a versatile return, so cases where one party didn’t desire the resource owned by the other party could be avoided. Consequently, old farmer Tom could trade his esteemed cow Maryanne for money instead of settling for a resource of no use to him.

So human civilization was now composed of large settled groups of humans focused on contributing to their community by collecting, producing, or providing a single resource that would earn them financial resources to spend on desired resources to satisfy their personal needs. A consequence of such a society was the concept of ownership. Because resources were traded for other resources, this naturally resulted in a society that advocated and perpetuated ownership of property over the sharing of resources. Where a concept such as “owning” exists, concepts such as “loss” and “theft” exist too. If one owned property and it was acquired by a second party outside of mutually-consented conditions, one’s property was by definition stolen. I highlight this to show that loss and theft are consequences of ownership, and as a result cannot exist without it. But ownership is a by-product of scarcity; therefore it must follow that there is no such thing as loss or theft without scarcity. “But there’s no way you can eliminate scarcity.” Don’t jump to conclusions; all will be revealed soon, young Padawan.

Information

Information / ˌinfərˈmāSH(ə)n / noun 1. Facts provided or learned about something or someone: a vital piece of information. 2. Computing data as processed, stored, or transmitted by a computer.

Storage / ˈstôrij / noun 1. The action or method of storing something for future use. 1.1. The retention of retrievable data on a computer or other electronic system; memory. 1.2. Space available for storing something, especially allocated space in a warehouse.

I’d like to take a quick detour to talk about information for a moment. Since the invention of the digital computer during the 20th century, the processes through which we consume and produce information have changed considerably. What used to be stored physically in books is now stored digitally on computers. What was once written and transcribed onto paper via some writing utensil or a typewriter is now typed into digital documents, compiled into machine language, and saved onto the hard drive of a computer. This shift is so paramount that historians are calling this period in our history the Information Age.

Why did this shift occur? Simply put, digitized documents save physical space, allowing humans to store orders of magnitude more information than before; a library of physical books can be stored as digital documents on a computer hard disk, which takes up negligible space relative to the size of a library. Moreover, what was once perishable over time can now exist indefinitely through time. Physical books, if maintained well, can last a few millennia, but digital information lasts as long as we are still able to use computers. If all pans out well for our kind, this information would persist through to the end of time. Digital documents also offer mutability; it’s a lot easier to modify digital text than physical text. Finally, digital information is a lot easier and faster to retrieve and send through space. As long as a computer is connected to the internet, a user has the ability to access what is essentially humanity’s aggregated knowledge repository and to send information across the world at the speed of light, as an upper limit of course. There is no need to spend resources on printing books, and the waiting time for acquiring the information is negligible. Not to mention all this can be done from the comfort of your own home. If I must be honest, I find it amazing that we haven’t made a complete shift to digital information yet.

Alright, let’s take a step back; I’ve given you a lot of information so far. I want to formalize this idea of information, by elaborating on what I call the information ecosystem:

Information Ecosystem / infərˈmāSH(ə)n ˈēkōˌsistəm / compound noun 1. The aggregate collection of all types of information repositories. Information repositories can be classified as either mental contents, information stored in a human’s brain, or transactive memory, a process by which information is stored external to the brain in physical articles.

Humans have had to use physical articles as mnemonic devices due to the brain’s inability to retain and efficiently recall all the information humanity has amassed. Some examples of mnemonic articles are physical books, audiobooks, digital documents stored on one’s computer hard disk, and the internet. The main difference between mental contents and transactive memory has to do with the latency before the information can be used. Mental contents are the only information type that can be readily used. Transactive memory has to undergo a conversion into mental contents, which is drawn out by the time it takes to read and understand the information. Together, mental contents summed with transactive memory comprise a human’s complete accessible knowledge base.

So there you have it, a summary of how exactly we got here. A complex interaction of all the conditions described, mixed with a trace of randomness, are the contributing factors that led to present-day society. This leads me back to the original sentiments of disillusionment I expressed at the beginning of this think piece and how they relate to this “point” I have yet to explain. I hope after all this information, the picture is starting to gain some resolution.

The Point

Ephemeral / əˈfem(ə)rəl / adjective 1. Lasting for a very short time.

Before jumping into the essence of this think piece, it might be beneficial to explain what set me down the path of introspection that culminated in this written body. You see, I like certainty. I also love life. But here’s the catch — life is ephemeral and no one is certain about what happens once it ends. That made me sad. Up until the age of eighteen, I kept this truth buried in the deep abyss of my conscious thoughts, much like everyone else does. Life was good. However, it didn’t stay buried for long; it propagated back up like a bubble of air determined to resurface from its watery imprisonment.

Like most of you, at the age of eighteen I was immediately expected to discover my life’s purpose and pursue it over the course of the rest of my life, so I did what was only sensible — I set about completing this stressful task. What followed was an intense reflective period during which I thought deeply about my life, my interests, and society at large. What were my fundamental interests? What were my strengths? What would the future look like? And how could I contribute to its betterment? I quickly ran into a problem. I liked a lot of things, I had many strengths, and I did not know with certainty what the future had in store. I liked to paint. I liked to build. I liked to compose music. In essence, I was and still am drawn to any process through which knowledge and creative intervention result in the emergence of some novel product. You could literally copy-paste that onto a plethora of present-day professions. The future was also likely to spawn a slew of other exciting professions that would pique my interest.

Technology / ˌtekˈnäləjē / noun 1. The application of scientific knowledge for practical purposes, especially in industry. 1.1. Machinery and equipment developed from the application of scientific knowledge. 1.2. the branch of knowledge dealing with engineering or applied sciences.

This led me to obsess over the concept of time, hoping that one day humans would be able to extend it, so that I could do everything I’d want to do in my lifetime. It was the juxtaposition of my desire to do it all with the realization of time’s ephemerality and the prospects of new professions emerging from technological advancements that set me on a cascading search for truth and a solution to my woes.

“But what is this truth and that point, or whatever you want to call this epiphany?” I came to realize that technology is a powerful tool. It is the most powerful tool in humanity’s arsenal. It’s the only explanation for how a feeble species like ours was able to reach the top of the animal food chain. But here’s the thing: technology alone does not provide the key; it is merely a tool used to achieve our goals. These goals are conceived through the interaction between our needs and the conditions imposed by the physical environment of modern society. I’ve already outlined human needs in detail — these are summarized in Maslow’s Hierarchy of Needs. But what about modern society? In my opinion, it is flawed and in need of radical transformation. We live in a world of individualism, rampant hoarding, and excess consumption, all fueled by the pursuit of wealth, often at the expense of others.

Zero-Sum Game / ˈzirō səm ɡām / compound noun 1. A mathematical representation of a situation in which each participant’s gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero.

We live in a zero-sum society where the gains of few come at the cost of many, and if it’s not at the expense of people, it’s at the expense of the natural environment. This is not our fault. We are simply optimizing modern society’s utility function; the more financial wealth you attain, the more influence you gain in society and the closer you get to achieving self-actualization on the hierarchy of needs. There are no terms in the function that penalize you for damaging the physical environment, negatively affecting the lives of others indirectly, or over-consuming. What’s more, financial wealth can be inherited through time, so it makes sense to strive to arbitrarily maximize one’s financial wealth so that one’s progeny may reap the benefits too and focus on self-actualization needs in their lifetime instead of concerning themselves with basic ones. The problem with inheritance is that it often grants its beneficiaries power in the absence of wisdom. It is essentially entrusting a child (in wisdom) with the influence to make decisions significant enough to have a lasting impact on humanity’s eventual trajectory.
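The zero-sum definition above is easy to verify on a concrete example. Here is a minimal sketch using Matching Pennies, a standard textbook zero-sum game; the payoff numbers are the conventional ones for that game, not anything specific to this essay’s argument.

```python
# Matching Pennies: each player shows heads or tails. If the coins match,
# player 1 wins a penny from player 2; otherwise player 2 wins a penny
# from player 1. One player's gain is exactly the other's loss.
payoffs = {
    ("heads", "heads"): (+1, -1),
    ("heads", "tails"): (-1, +1),
    ("tails", "heads"): (-1, +1),
    ("tails", "tails"): (+1, -1),
}

# Summing both players' payoffs for each outcome confirms the defining
# property: every outcome's total utility change is zero.
total = {outcome: a + b for outcome, (a, b) in payoffs.items()}
print(total)
```

The claim in the paragraph above is that modern society's utility function behaves like this table: wealth gained by one party is balanced by losses imposed on other people or on the natural environment, with no term rewarding anyone for avoiding those losses.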

Education, the acquisition of information, seems to be the only way to succeed if one is born into unfavorable circumstances. The problem with education is that it requires considerable activation energy, a threshold of resources, to pursue. It also requires time to acquire; converting transactive memory into mental contents is a prolonged process. This must be done all while satisfying all of your basic needs, because as Maslow remarked, higher-level needs cannot easily be met unless lower-level needs are satisfied. And all of this needs to be achieved before you die, because nature’s objective function does not factor in knowledge acquisition and transfer as essential for survival. As a result, one must acquire, act on, and accumulate financial wealth all while satisfying their basic needs.

“What happens when groups are unable to satisfy their needs?” The system breaks. If people cannot satisfy their basic needs, they succumb to theft and if nations cannot satisfy their basic needs, they go to war. You see, most wars have been fought for resources, albeit under altruistic pretenses.

“The only incentive on a practical level to go to war is to acquire resources, and in the United States’ case it frequently is either energy resources or, shall I say, supporting political alliances to preserve access to energy resources.”

- Dylan Ratigan, The Choice Is Ours (2016)

It all leads back to the fact that we live in a world fraught with scarcity. While we are able to organize ourselves and collectively work to drive the machine when our needs are met, as soon as we lack resources we regress to a survival-of-the-fittest mentality and a fight for the available resources.

“So why is this the case then?” After running a root cause analysis on modern civilization, I found the following issues to be the fundamental cause of humanity’s problems:

  1. Scarcity
  2. Individualism and inefficient use of resources due to hoarding and excess consumption
  3. Perpetuation of outmoded processes and ways of thinking
  4. Inefficient use of time
  5. Resistance to radical technological advancements

Put bluntly, scarcity leads to war, theft, hunger, and poverty. Individualism leads to hoarding and excess consumption in spite of disproportionate need fulfillment amongst humans. Perpetuation of outmoded processes and ways of thinking is our inclination to be complacent in the presence of a seemingly functional system. “Okay wise guy, anyone can simply ramble about the problems in the world, but how exactly do you suppose we solve these issues?” I’m glad you asked that question. Simply put, the use of technology coupled with a large-scale mindset shift. That is the “point” I’ve mulled over all this time.

“So what do you suggest we do to get ourselves out of this mess, Afika?” Here is how I see us getting out of this plight, and so begins my proposal for steering humanity towards some best-case utopian future. A number of burgeoning technologies have the potential to liberate us from the problems we’ve faced as a civilization since the dawn of man and to help us transcend modern civilization to its next stages. There are two of which I am a notable proponent and which I believe hold most of the solutions to our problems; others will make a large impact on human civilization but are less necessary for change, or rely on the prior adoption of at least one of the first two to be effectively implemented. The two technologies in question are the brain-computer interface, and the adoption of wide-scale automation coupled with the abandonment of traditional financial institutions, representative money, and individual ownership in what is often described as a post-scarcity economy.

Brain-Computer Interface

Life is ephemeral, meaning we all live for a finite amount of time before leaving Earth. This was a reality that truly set in when I was eighteen, trying to find my life’s purpose. This led me to channel my frustrations into art:

TIME. Afika Nyati. 2013. Oil on Canvas

I wasn’t quite sure if I’d ever live to see a cure for death, so I decided to preoccupy my mind with constructive thoughts, until one day a realization dawned upon me. If death is inevitable, meaning life is ephemeral, then in order to accomplish as much as possible within your life, you have to make efficient use of your time. This meant minimal procrastination, ensuring all life activities had long-term impact, and planning and thinking in broad timescales. It was likely that I was never going to truly “live in the moment” and that I would be making present-day decisions on the basis of a future that may or may not exist. I understood and acknowledged this, and have lived with the outcomes ever since. Even so, I continued to obsess over time efficiency, and then in 2016, after viewing a Recode interview with Elon Musk (the same interview above), I began to think deeply about the technology Mr. Musk proposed and how it could save me considerable amounts of time.

“So what exactly did Elon Musk propose?” He proposed the brain-computer interface, sometimes called the “Neural Lace”, a technology that would deliver a digital extension to the human brain. The primary justification for having such a device is to create a more direct interface between ourselves and our digital devices. Elon Musk calls it “eliminating the I/O constraint”. “What exactly does I/O mean?” In the IT industry the term I/O stands for Input/Output. An input to a computational device acts as a sensor for stimuli from the physical world; this could be a camera on an iPhone or a mouse on a desktop computer. Computational devices use output devices to perform physical operations in the physical world through actuation; in a robot, this would be the movement of its legs to traverse physical space. By eliminating the I/O constraint, it would be easier for humans to acquire information (eliminating the input constraint) and to act out our thoughts (eliminating the output constraint).

Bandwidth / ˈbandˌwidTH / noun 1. The capacity for data transfer of an electronic communications system.

Still confused? Let me elaborate in more detail using some of the nomenclature we’ve developed over the course of this think piece. Humans have brains that receive input from their five senses. These five senses are the only input channels to our brain; they allow us to see, hear, smell, taste, and touch. Each sense varies in bandwidth, with vision outperforming every other sense. The human brain is connected to the limbs via the central nervous system, which lets humans send signals to their limbs to traverse their physical environment and manipulate objects in it. In addition to their limbs, humans have the faculty of speech, which combined with spoken language equips them with yet another process with which to interact with their physical environment. The limbs and the speech faculty together comprise the total sum of our output channels. Beyond input and output channels, humans have the ability to store information as mental contents, and to process and run logical deductions on them.

Interface / ˈin(t)ərˌfās / noun 1. A point where two systems, subjects, organizations, etc., meet and interact. 2. Computing a device or program enabling a user to communicate with a computer.

Latency / ˈlātənsē / noun 1. The state of existing but not yet being developed or manifest; concealment. 2. Computing The delay before a transfer of data begins following an instruction for its transfer.

Throughput / ˈTHro͞oˌpo͝ot / noun 1. The amount of material or items passing through a system or process.

The brain-computer interface seeks to accelerate the rate of information transfer between the brain and interfaces external to it. An interface could be a computer, the internet, another brain (and the mind attached to it), or any digital processor. Currently our senses, speech faculty, and limbs are the only methods with which we are able to convey (send) and receive information, and all of these methods suffer from high latency and low throughput in some form.

As I mentioned earlier, the drawbacks to spoken language as an output channel are 1) information is susceptible to alteration as it is transferred from encoder to decoder, and 2) there is a dissemination limit, controlled by the reach and connectedness of human networks, on how far one’s information can travel in both space and time. I’m going to mention two more drawbacks to language as a form of information transfer. The first is that the latency of transferring information from the encoder to the decoder is fairly high. When an information source develops mental contents it wants to communicate, the source must first convert those contents into an equivalent message in spoken language. This process often takes time because there is no one-to-one mapping from a thought to a sentence; there are many sentences with similar semantic meaning but varying degrees of information loss, so the encoder needs to perform a quick heuristic search for the translation with the least error. Additionally, humans are better at recognition than recall, so they might spend time attempting to recall words that would better minimize information loss, further exacerbating the delay. Once a message has been successfully encoded by the source into spoken language, there is the latency between information dispatch (by the encoder) and information acquisition (by the decoder), namely the time it takes to verbally communicate the message. Finally, once the entire message has been heard by the decoder, it must be converted from spoken language back into mental contents before the information is truly acquired, a conversion that, like encoding, is susceptible to information loss due to language ambiguity. The second additional drawback varies in magnitude between different languages, but all are burdened by it to some extent. This drawback is throughput.
As explained above, throughput is the number of units of information a system can process in a given amount of time. For most languages the unit is a word (or, to be more specific, a morpheme); other languages, such as Chinese, use characters, which often represent more information per character than words in other languages. As a result, Chinese has a higher information throughput than English. Even so, the benefits of higher throughput are often counteracted by the effort needed to memorize several thousand ideographs and the sets of ideas they can represent.
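To make the latency and throughput comparison above concrete, here is a rough back-of-envelope sketch in Python. The words-per-minute rates and the bits-per-word entropy figure are illustrative assumptions I’ve chosen for the sake of the example, not measured values:

```python
# Rough estimate of information throughput for several human
# communication channels. All rates are ballpark assumptions chosen
# only to illustrate the ordering, not measurements.

BITS_PER_WORD = 12  # assumed average entropy of an English word

channels = {
    "speech (assumed 150 wpm)": 150,
    "touchscreen typing (assumed 40 wpm)": 40,
    "silent reading (assumed 250 wpm)": 250,
}

for name, words_per_minute in channels.items():
    bits_per_second = words_per_minute * BITS_PER_WORD / 60
    print(f"{name}: ~{bits_per_second:.0f} bits/s")
```

Even the fastest channel in this sketch moves tens of bits per second, orders of magnitude below a commodity network link; that gap is exactly the bottleneck the brain-computer interface aims to close.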

Our other output channels, namely our limbs, also have high latency and low throughput. Imagine the following scenario: you receive a message from your mom on your smartphone, and the message reads, “Sam, I just read an article about an impending artificial intelligence apocalypse”. But you’re a computer scientist, and you know the likelihood of this occurring in 2017 is rather low. You want to communicate this thought to your mother, so how do you go about doing so? You type out a message on your smartphone by tapping letters on its touchscreen to form words to form sentences, at a maximum rate of two thumb taps at any given moment. Let’s assume it takes you about ten seconds to type and send your message. There was a ten-second latency in information transfer caused by the output channel of finger actuation. If you think about it, all our input channels into computational devices work in the same way: feeding button presses, clicks, and taps into computational devices, or manipulating the spatial position of a physical input device to move the spatial position of a coupled digital object in digital space. These interactions are a bottleneck to information transfer.
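The ten-second delay in the scenario above can be modeled as a simple function of message length and tapping rate. A minimal sketch, where the tapping rate is an assumed illustrative figure:

```python
# Toy model of the output-channel bottleneck: the latency of a reply
# is dominated by the time spent physically typing it. The tapping
# rate below is an assumed illustrative figure, not a measurement.

def typing_latency_seconds(message: str, taps_per_second: float = 5.0) -> float:
    """Seconds spent tapping out a message, one tap per character."""
    return len(message) / taps_per_second

reply = "Mom, an AI apocalypse in 2017 is extremely unlikely."
print(f"~{typing_latency_seconds(reply):.1f} s to type {len(reply)} characters")
```

However fast the network is, the reply cannot leave your brain any faster than your thumbs can serialize it, which is the point of the scenario.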

If you recall, I mentioned that there are two types of information that comprise the information ecosystem: mental contents and transactive memory. Mental contents are information already stored in our brains, and transactive memory is information stored external to the brain in physical articles. Transactive memory is useful because it relieves our brain of the burden of storing large quantities of data, but its drawback is that it has to undergo a conversion into mental contents, prolonged by the time it takes to read and understand the information. Reading as a process makes use of the eyes, our highest-bandwidth interface to the physical world. Light reflected from paper enters our eyes and hits the retinas, where the light signals are converted to neural signals interpreted by the brain. The order and spatial relation of the symbols are then interpreted to perceive language, which is finally converted into mental contents. Much like information transfer between a source-and-receiver brain pair, there is latency inherent in this information transfer process.

The brain-computer interface circumvents these intermediary information conversions in written and spoken language and allows one to transfer information from one’s brain to another brain almost instantaneously. It’s really hard to comprehend the significance of such a device immediately, so I’ll let the idea marinate for a moment. “Why is this revolutionary?” Ever since humans developed the capacity to retain memory, they have had to overcome the difficulty of disseminating mental contents to other humans. I call this the information transfer dilemma. Humans have best overcome this dilemma by using spoken language, which gave them the faculty to share detailed information about the physical environment with each other. As mentioned before, spoken language comes with a set of drawbacks, namely: susceptibility to information alteration, a dissemination limit controlled by the reach of human networks, high latency, and low throughput. The second dilemma humans have had to overcome is developing a way for information to persist through time. I call this the information persistence dilemma. Humans overcame this dilemma initially by storing information in books in the form of written language, but with the dawn of the Information Age, information began to be stored digitally on computers. Digital information proved superior on account of its space efficiency, its indefinite persistence and resistance to atrophy, its mutable-friendly quality, and its lower transfer latency.

“It’s easy to see why efficient information transfer is crucial, but why is efficient information persistence also crucial?” Simply put, imagine a civilization in which every generation had to reacquire all knowledge previous generations amassed because information could not be transferred through time. That civilization would go nowhere. A good example of this can be found in Egypt’s history; it is speculated that the Egyptians stopped building pyramids because they forgot how to build them. This is what happens when information is lost. When information is persisted through time, the magnitude of humanity’s aggregated knowledge either remains constant or increases. Information is crucial for advancing society because it increases the likelihood of technological advances which by definition improve our living conditions by helping us satisfy more needs in more efficient manners.

The brain-computer interface would provide a better solution to both dilemmas. In the case of the information transfer dilemma, information would be transmitted almost instantaneously. The brain-computer interface would essentially give our brains a direct connection to the internet, allowing us to use its transfer speeds to send mental contents from one brain to any other interface — another brain, the internet, or some computational device. In the case of the information persistence dilemma, information would be persisted like all other digital data in 2017 — on the “cloud”. Essentially, mental contents, or thoughts if you will, would be uploaded and saved on remote servers much like any other form of digital data is today. As mentioned, this would be advantageous because digital information is space efficient, resistant to atrophy, mutable-friendly, and has lower transfer latency. You may recognize that storing information in books offered similar benefits, so you might wonder why this is a better solution. Simply put, with all one’s mental contents saved on the cloud, one would essentially have a memory extension that one could use to upload and download “thoughts” to one’s brain, much as one might download digital information from the cloud onto a computer’s hard disk. Additionally, one wouldn’t be limited to accessing one’s own information alone; one would be able to acquire any and all information from the internet. The implication of these two improvements is that humans would be able to speak to each other instantly and free of subjectivity, with minimal information loss from the translation error inherent in language. Communication could be richer too; emotions and mental states could be shared. Furthermore, all humans would be able to acquire all known knowledge instantaneously.
Gone are the days of spending the best years of your life acquiring knowledge at learning institutions; no longer would millions of humans spend years of their lives re-deriving common knowledge. As a result, the Information Ecosystem would consist solely of mental contents, because all physical transactive memory would become digital transactive memory, which, by virtue of every human’s connection to the internet, would become mental contents. I hope by now you’re starting to see what I’m getting at.

“So you basically want to make us cyborgs. This is unnatural and goes against nature.” You could call the end result a cyborg, but I’d be reluctant to use that word; it has negative connotations that I don’t believe should be attached to this technology. With regard to the unnatural claim, I’d be inclined to disagree. If you recall, at the beginning of this think piece I made an analogy in which I anthropomorphized nature as a designer that continually iterates on and refines organisms that best maximize its objective function, survival, through time, with the hope of eventually converging on some optimal design. What such an intervention would do is simply delegate control of this iterative process to humans and allow us to perform it over shorter timescales. Recall that humans today are in no way an optimal design. Another benefit of taking control of this iterative process is that we would now be able to reconcile nature’s original objective function with our higher-level needs: love, esteem, and self-actualization. Thus, collectively we would be closer to achieving self-actualization and every need below it on the Hierarchy of Needs.

If you’re still not convinced by my argument, consider this: a similar situation occurred shortly after vaccination was invented. There were religious objections to vaccines, mostly based on beliefs that the body is sacred, should not receive certain chemicals or blood or tissues from animals, and should be healed by God or natural means. While these objections still exist today, I think it is safe to say that most people now support the use of vaccines. If you need to be further persuaded, take Elon Musk’s counterargument:

“By far you have more power, more capability, than the President of the United States had 30 years ago. If you have an Internet link you have an article of wisdom, you can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn’t exist, not that long ago. So everyone is already superhuman, and a cyborg.”

- Elon Musk, Future of Life Institute: Superintelligence: Science or Fiction

“I can see where you’re coming from. Who would risk money on such a risky concept though?” Funny you would ask that. As far as I know there are two entrepreneurs already making progress on this problem: everyone’s favorite billionaire Elon Musk with Neuralink and multi-millionaire Bryan Johnson with Kernel. As breakthroughs in the area begin to occur, it’s only a matter of time before more money is thrown at this problem.

You might be wondering how the brain-computer interface (BCI) might help eliminate the five fundamental problems I mentioned previously. You might also be wondering how such a technology would transform society. To answer the first question: the acceptance of the BCI would address problem five, resistance to radical technological advancements, and its adoption would address problem four, inefficient use of time. The brain-computer interface would undeniably be classified as one of humanity’s greatest inventions; widespread acceptance and adoption of such a technology would mark a shift away from resistance to radical technological advancements. I’ve already elaborated on how instantaneous communication, instantaneous knowledge acquisition, and a persistent remote knowledge repository would save us all significant amounts of time. There would be little use for formal learning institutions, and every human would individually have direct access to, and understanding of, all human knowledge. As a civilization, we would undergo exponential growth in knowledge purely because more people would devote more time to tackling humanity’s toughest problems. More intelligent people also means more intelligent solutions for satisfying our collective Hierarchy of Needs. If this doesn’t inspire optimism about the future, I’m not sure what will.

Some considerations that do need to be made are the following:

  1. We will need commonly-accepted protocols for organizing such an information infrastructure, taking into account best use cases and undesirable behaviors, much as there were with internet protocols.
  2. We will need strong cybersecurity and privacy initiatives to prevent the possibility of attacks on users by any malicious agents.

These considerations would naturally only become practically examinable once more research has been done on the concept.

Artificial General Intelligence / ˌärdəˈfiSHəl ˈjen(ə)rəl inˈteləjəns / compound noun 1. The intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and a common topic in science fiction and futurism.

Intelligence Explosion / inˈteləjəns ikˈsplōZHən / compound noun 1. A possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to rapid emergence of ASI (artificial superintelligence), the limits of which are unknown. An intelligence explosion would be associated with a technological singularity — the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

The last comment I’d like to make about the brain-computer interface compares it with artificial general intelligence. There’s been a lot of fanfare around machine learning and artificial intelligence in recent times, and rightly so. Artificial General Intelligence, if successfully implemented, might be humanity’s last innovation, due to the likelihood of an intelligence explosion happening shortly after artificial intelligence’s ascent to general intelligence. As elaborated above, “the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.” This could happen within the span of a few days.
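The runaway dynamic behind an intelligence explosion can be sketched as a toy recurrence: each redesign cycle, the system converts a fraction of its current capability into further improvement, so the improvement factor itself grows. Every number here is an illustrative assumption, not a prediction:

```python
# Toy model of recursive self-improvement. Capability starts at an
# arbitrary "human level" of 1.0; each generation the system converts
# an assumed fraction of its current capability into improvement.
# Both constants are illustrative assumptions, not estimates.

capability = 1.0
IMPROVEMENT_RATE = 0.5  # assumed fraction of capability reinvested

for generation in range(1, 11):
    capability *= 1 + IMPROVEMENT_RATE * capability
    print(f"generation {generation}: capability ~ {capability:.1f}")
```

Because the multiplier grows with capability, growth under this recurrence is faster than exponential, which is why a timescale of days, rather than decades, is the scenario people worry about once the process starts.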

The risk associated with A.I. is that we only get one shot at it; once the switch is flipped, there is no turning back. If such an A.I. were optimized with an underspecified objective function, human civilization would likely plunge into irrevocable dystopia. Most theories around this are based on the argument that humans, as a species, are flawed in character, and that nature would likely do better without them. Another argument is that an AGI would likely bear no malice against human civilization until the point at which we get in the way of it achieving its objective function. This argument was supported by renowned theoretical physicist Stephen Hawking with the following analogy:

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

- Stephen Hawking

This is why initiatives such as OpenAI exist today — to ensure we implement this technology correctly. While I’m a proponent of an A.I. future, I think it’s just as important for us to invest resources in the brain-computer interface, specifically because of the dangers posed by a malignant AGI. In the scenario where humans have successfully invented and adopted the brain-computer interface but have also unleashed a malignant AGI, the presence of the BCI and enhanced human intelligence would offer us a chance at competing with machines. In a world without the BCI, we would not stand a chance. To make matters worse, in that world it’s likely that we would not even be aware of the AGI’s supremacy over us once it promoted itself to ASI (Artificial Superintelligence) status, and we could incorrectly classify it as benign. We would be convinced that we were in control, much like how cats are convinced they have dominance over their owners because their owners tend to their needs (feeding and cleaning them), when in reality it is the owners who hold dominance.

Post-Scarcity Society

Greedy Algorithm / ˈɡrēdē ˈalɡəˌriT͟Həm / compound noun 1. A mathematical process that looks for simple, easy-to-implement solutions to complex, multi-step problems by deciding which next step will provide the most obvious benefit. Such algorithms are called greedy because while the optimal solution to each smaller instance will provide an immediate output, the algorithm doesn’t consider the larger problem as a whole. Once a decision has been made, it is never reconsidered.

For as long as life has existed in the universe, it has had to navigate a world afflicted by scarcity. Most meaningful life has been in all-out war for the limited resources stashed in small pockets of the universe, and we humans aren’t any different. The circumstances of our situation have created a world of individualism in which every agent is incentivized by a flawed society to hoard, consume, and divide. We are biological machines in a complex environment running greedy algorithms in the hope of finding an optimal solution for ourselves, as opposed to finding a globally optimal solution. For most of history it only made sense for civilization to organize itself this way. It was efficient. If everyone specialized and devoted their time to a single resource, collectively as a society we would produce more resources, and with more resources, more needs would be satisfied overall. With the use of representative money within the resource exchange system, any mutual trade could be made. But as I mentioned previously, such a society also bred a culture of ownership, spoiled by the hoarding and overconsumption of resources. In such a society sharing makes no sense, and because of its zero-sum nature, many suffer as a result. However, I think we’ve reached a point in our history where we could be freed of the consequences of scarcity. There might be a way to transform civilization from a zero-sum society into a positive-sum society, and the answer lies in a collective mindset shift coupled with the advocacy of an automated world.
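The greedy-versus-globally-optimal distinction above can be made concrete with a classic textbook toy example, making change with a coin set where always grabbing the largest coin (the locally optimal move) misses the globally optimal answer. This is a standard algorithms illustration, not anything specific to economics:

```python
# Greedy vs. globally optimal: coin change with coins {1, 3, 4}.
# The greedy strategy takes the largest coin that fits at each step;
# dynamic programming considers the whole problem.

def greedy_change(coins, amount):
    """Always take the largest coin that fits (locally optimal)."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def optimal_change(coins, amount):
    """Dynamic programming over all sub-amounts (globally optimal)."""
    best = [None] * (amount + 1)
    best[0] = []
    for sub in range(1, amount + 1):
        for coin in coins:
            prev = sub - coin
            if prev >= 0 and best[prev] is not None:
                if best[sub] is None or len(best[prev]) + 1 < len(best[sub]):
                    best[sub] = best[prev] + [coin]
    return best[amount]

print(greedy_change([1, 3, 4], 6))   # three coins: [4, 1, 1]
print(optimal_change([1, 3, 4], 6))  # two coins:   [3, 3]
```

With a target of 6, greedy uses three coins where two suffice. Individuals hoarding resources are playing the same kind of locally optimal game: each move looks like the obvious best step, yet the system as a whole lands far from the global optimum.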

Most people have accepted that scarcity is a fixed condition of our world, and strictly speaking they are not wrong; the law of Conservation of Energy supports this. But what they fail to observe is that society has advanced so rapidly in the last few centuries that we are at a tipping point in history where we could overcome scarcity, but only if we choose to do so. I am in no way declaring that it is possible for us to live in a world of complete abundance, where any need is simply met and any individual may live a life led by gluttony. No, what I am here to tell you is that we have the technology to make the transition from an unquestionably scarce world to one governed by artificial abundance.

“What exactly is meant by artificial abundance?” Simply put, it is a circumstance in which all the basic needs of all individuals can be met, along with some significant proportion of their desires for goods and services, thereby terminating the need for representative money. You might be asking yourself how this could possibly work. The keys to implementing and sustaining such a system lie in the efficient use of resources, automation, data analytics, and a culture of sharing as opposed to owning.

Lately, there has been an adverse reaction to the increased automation of jobs, and understandably so. In the current system, those who are displaced are left with outdated skills, wasted years learning those outdated skills, and the need to find a new way to attain the financial resources to meet the needs of their family and self. In spite of this reality, I think it is foolish to resist increased automation in the long term. Let’s think about this using Maslow’s Hierarchy of Needs as an aid. All humans have to fulfill a set of basic needs to survive. These include, but are not limited to, food, water, warmth, rest, shelter, and security. Throughout history humans have had to spend time finding ways to satiate these needs, and we got really good at it. The only issue: humans still have to devote time to collecting and producing the resources to sustain these needs. It might not be all of us, but there is a sizable group of people still invested in this business. Enter automation: we have created robots and machines that are far better than us at performing routine tasks, work at faster rates than us, and are not afflicted with the need to rest, so they can work night and day. These robots represent a threat to the jobs of many denizens, but they also offer the opportunity for humanity to be liberated from the need to occupy our time fulfilling these basic needs. If fully embraced and used to automate all processes associated with basic needs, machines could be delegated lower-level jobs while we humans occupy our time fulfilling higher-level needs such as love, esteem, and self-actualization.

Arcology / ɑːˈkɒlədʒi / noun 1. An ideal integrated city contained within a massive vertical structure, allowing maximum conservation of the surrounding environment.

“So you suggest using robots for labor. But this doesn’t solve the issue of scarcity.” I agree with that statement; resources will still exist in limited volumes. But I’d also like to remark that we humans are failing to use the resources we do have efficiently, and this only exacerbates the scarcity problem. Rich people are hoarding underutilized resources, and materials are not being recycled, amongst other issues. We need to let go of the concept of ownership, reuse resources efficiently, and track, monitor, and control our use of resources personally and collectively using data analytics.

A set of vertical farms designed for use in China. Image: Vincent Callebaut Architects.

Imagine a world where you possess only the bare minimum: clothing and personal items, and you live in an arcology powered by renewable energy with an integrated store of shared resources that can be utilized on demand. If you need a bite to eat, you walk into the shared cafeteria space and grab a sandwich put together by a sophisticated food robot powered by the sun, which sourced its ingredients from the arcology’s integrated smart farm. You’re done eating your sandwich and now you’d like to take a trip downtown to meet an old friend. So what do you do? You request a ride to your destination using one of the city’s shared, autonomously driven electric automobiles. I could continue this scenario, but I’ll trust that you were able to glean the essential differences of this world: it is governed by the efficient use of resources, whether space, food, or physical devices; there is a culture of sharing as opposed to owning; and resources are reused habitually. What I failed to mention is that every process would be monitored and regulated behind the scenes by sophisticated computer systems running data analytics on individual and group consumption habits, ensuring the most efficient use of resources, and all of this would be powered using the “free” energy we get from the big ball of energy in the sky and the weather it creates. Theft? Theft can’t exist in a world where everyone has access to all resources. Through these interventions we will steadily reach a point at which the presence of scarcity is obscured.

In addition to the points I’ve raised beforehand about this topic, I think it’s of greater importance to make the transition to a post-scarcity society because many prospective technologies would ruin humanity if rolled out in our current capitalist society. Gene Editing and Artificial Intelligence are two such technologies. Only the rich would be able to afford them, which in the case of Gene Editing would make it possible for the rich to become stronger, faster, more intelligent, and better in every way than a regular human. Do you want to know what this would do? It would further exacerbate the inequality gap and likely create literal biological classes of humans, with the poor who are unable to afford the technology relegated to the lowest class. A similar result would likely follow from the introduction of a true Artificial Intelligence.

“I see your argument, but how exactly would we transition to such a world?” I think the first step is to collectively acknowledge this vision as a better world. Once we are all in agreement, the transition will be significantly easier. There will need to be cross-disciplinary discussion and intervention, especially in ideating ways to support those who face labor displacement due to automation. Nonetheless, I want to express that I don’t think this process will be effortless. We have been entrenched in a capitalist society for quite some time now, and our minds have been conditioned to this societal system. In fact, the brain-computer interface might even be an easier problem given that it is purely technical, while the post-scarcity problem involves imperfect entities. The vision might seem overly ambitious, but isolated areas of society already function in this way: libraries are governed by a culture of shared resources used on-demand — the same can be said of the internet. We’re even witnessing in real-time a shift in the automotive industry from human-driven and distinctly-owned automobiles to the adoption of shared autonomously-driven electric automobile fleets, similar to those mentioned in my visioning exercise. This eventual shift will result in better utilization of city space due to a lower need for parking space, fewer accidents on the road, better utilization and reduction of commute time, and cleaner transportation systems. There has also been talk of instituting a policy called Universal Basic Income (UBI). Briefly, UBI is a form of social security in which all citizens or residents of a country regularly receive an unconditional sum of money, either from a government or some other public institution, in addition to any income received from elsewhere. It sounds like an outlandish idea, but serious consideration has been given by countries worldwide to implementing or testing such a system.
In fact, some countries, such as Finland, are already experimenting with it. However, while I am a proponent of UBI, I see it as an intermediary to a more radical economic system such as the resource-based economy mentioned in ‘The Choice Is Ours’ documentary above.

“Which of the five fundamental problems you identified would this intervention tackle?” The adoption of a post-scarcity society would tackle the first three problems, namely: scarcity; individualism and the inefficient use of resources due to hoarding and excess consumption; and the perpetuation of outmoded processes and ways of thinking. How it would do so is fairly self-explanatory.

Closing Remarks

It’s taken a long time to get to this point in my think piece; it’s taken even longer for me to consolidate my thoughts around what I had felt when I was eighteen and to reflect on what I felt needed to be done to ameliorate my frustrations. Although my worldview has been shaped considerably by what I have learnt studying computer science, and this is noticeably evident throughout my think piece, I’m a proponent of cross-disciplinary knowledge transfer, so my overall views are a product of the sum of my knowledge. But more importantly, my views and attitudes towards various subject matter fundamentally resulted from my desire to advance society. I truly just want to see a smile on everyone’s face and to wake up to good news in the morning. We’re too smart to act so dumb; change is needed. I’m often described as an overoptimistic futurist who has his head buried in a nonexistent world — there are truths to that. But when your profession’s underlying goal is to make life easier, you need to be a dreamer.

In closing, I’d like to reiterate the words I articulated earlier in this piece: this think piece is incomplete, it’s overoptimistic about human nature, and is likely fraught with many inaccuracies, technical inconsistencies, and assumptions about how the world works. But nonetheless, I’ve felt the urge to translate my mental dialogue into a physical record, if not for the purpose of starting a conversation, then for therapeutic exercise inherent in the process. This is a bold and somewhat arrogant undertaking; this I understand. But barring billionaire Elon Musk, I see few denizens of our planet actively attempting to deter humanity from precipitating its imminent self-inflicted extinction. I just want to make my contribution using the means I currently have — my knowledge of the current and prospective technological landscape and my voice.

I encourage you to think deeply about these issues and start discussion around them. It’s important for us to “rub our eyes every so often” and take a critical look at the world. There’s a lot of dialogue to be had and a lot of work to be done, this I know. This is why it’s important that we all contribute to the cause in some way. At one point I thought about keeping these thoughts to myself, but I soon realized that doing so would only slow progress to a better future. Hence, I encourage you to take what you will from my thoughts and proceed. If we get it right, maybe one day I won’t have to use 11,000 words to communicate my thoughts to you.

Notable Prospective Technologies
