Time travel will exist in 20 years. Don’t worry, the space-time continuum will be fine. We won’t alter our present. You won’t have the opportunity to accidentally hit on your distant relatives. Unfortunately for Homer Simpson, it won’t start raining doughnuts. In the comfort of our own homes, we’ll travel to any date, time and location we have data on.
Semantic Time Travel is coming, and it’s going to change the way we experience history.
IBM’s Watson is our first major milestone in advanced computer semantics and natural language processing. For all of its staggeringly impressive displays of deep analytics and machine learning, Watson is still blind and deaf. It doesn’t understand what people look like or why they are important, and it can’t hear a bird call and accurately identify the bird. We have access to endless data sets on everything from elevation, climate, indigenous flora and fauna, fossil records, materials, culture, people, movements…but Watson doesn’t know how to translate them into anything other than “Alex, what is X?” It’s time to embark on the next grand challenge: procedural virtual reality.
There are no boundaries to where or when we can Semantically Time Travel; our only limiting factor is a rolling scale of accuracy based on the data we have available. Watch a T-Rex and Velociraptors hunt a Triceratops in the late Cretaceous. Spy the early construction of the Great Pyramid of Giza in 2584 BC. Overlook the religious battlefields of the Crusades in 1100. Jump to any random date, time and location on earth and see a simulation depicting period-appropriate day-to-day life and language.
Computer Vision requires a lingua franca to tackle the challenges ahead. As processors get faster and networked machines begin to deep-crunch massive visual databases like YouTube, Flickr, Facebook and the world’s archives and museums, we need to accurately detect, sort and tag minutiae to specific objects, concepts, people, eras, etc. As this is a project that is orders of magnitude bigger than the sum of its parts, a common standard must exist among competing silos.
For the fuzzy pixels lost in translation, the environment will turn to Mechanical Turks for the human touch. Especially in the early days, a library of verifiable, accurate information is needed to start scaling to new imagery and video. SEO and integration with major search providers like Google and Bing will be crucial for a strong base layer.
To add spatial recognition and material analysis to the mix, photogrammetry will provide systems with a first pass of 3D data. By matching RGBZ data to tagged objects, the backend AI will begin to learn what different materials look like in different contexts, including time of day and weather conditions. For example, bark on an elm has a different texture and average hex code than bark on a dogwood. Using embedded location data on photos and videos, we can begin to distinguish between the same object or genus in different regions.
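To make the bark example concrete, here is a toy sketch of color-based material matching. The reference values are entirely hypothetical; a real pipeline would learn texture features from millions of tagged samples, not compare a single mean color.

```python
from math import dist

# Hypothetical reference data: average RGB of tagged bark samples per species.
# Real systems would learn texture descriptors, not just a mean color.
BARK_COLORS = {
    "elm":     (110, 96, 82),
    "dogwood": (140, 125, 108),
}

def classify_bark(avg_rgb):
    """Return the species whose reference color is nearest in RGB space."""
    return min(BARK_COLORS, key=lambda s: dist(BARK_COLORS[s], avg_rgb))

print(classify_bark((112, 98, 80)))  # nearest to the elm reference
```

Region-specific variants of the same genus would simply add more reference entries keyed by embedded location data.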
Robotics research will spur exponential improvements in machine learning and artificial intelligence. These advances will be necessary to form new associations between detailed data sets generated by the above tools and pre-existing knowledge bases like Wikipedia. This one-two combo will serve a huge role in developing the connective ontological tissue between specific objects, people and locations.
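At its simplest, that connective tissue is a graph of typed links between detected objects and knowledge-base concepts. The entities and edge names below are illustrative only, not a real knowledge-base schema:

```python
# A minimal ontology sketch: typed edges linking detected objects upward
# to broader concepts (names are hypothetical, not a real schema).
edges = {
    ("elm_tree", "instance_of"): "tree",
    ("tree", "subclass_of"): "plant",
}

def ancestors(entity):
    """Walk instance_of/subclass_of links upward until the chain ends."""
    out = []
    while True:
        nxt = edges.get((entity, "instance_of")) or edges.get((entity, "subclass_of"))
        if nxt is None:
            return out
        out.append(nxt)
        entity = nxt

print(ancestors("elm_tree"))  # ['tree', 'plant']
```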
Text-to-speech and translation technologies need to become radically advanced to recreate dialects and instantly translate generated dialogue in the synthetic scene. Connotation, accent and era-appropriate slang will have to be blended to create a convincing atmosphere. Artificial intelligence breakthroughs in conversation and tone will allow the time traveler to overhear side conversations as they might have been spoken in any context at any snapshot in time.
No one company or server farm will be able to climb this mountain alone. A major distributed computing project will begin to allow users to donate cycles on home machines and living room hubs. As Moore’s Law continues to scale, the project will naturally accelerate. Machines with dedicated GPUs will analyze small bits of visual data, while those without will focus on connections between tags and objects.
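The routing rule described above, GPU machines on visual data, CPU-only machines on tag linking, can be sketched in a few lines. Host names and work-unit labels are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Volunteer:
    host: str
    has_gpu: bool

def assign(volunteers, visual_jobs, graph_jobs):
    """Route GPU machines to visual analysis, CPU-only machines to tag linking."""
    visual, graph = list(visual_jobs), list(graph_jobs)
    plan = {}
    for v in volunteers:
        queue = visual if v.has_gpu else graph
        plan[v.host] = queue.pop(0) if queue else None
    return plan

plan = assign([Volunteer("den-pc", True), Volunteer("hub", False)],
              ["frame-0001", "frame-0002"], ["link-tags-A"])
print(plan)  # {'den-pc': 'frame-0001', 'hub': 'link-tags-A'}
```

A real volunteer-computing project would of course add redundancy and result verification, as BOINC-style systems do.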
The Semantic Graph
It’s a little daunting, procedurally recreating the world around us, from the particles deep within every atom to the stars above. A Semantic Graph of Operations must exist to prioritize how we create these scenes and build a realistic view of the world.
I’ll preface by saying there is no way I can do the imposing nature of these steps justice! This is intended to be a guiding light to turn your gaze forward.
The graph requires some embedded pre-requisite knowledge that will be repeatedly referenced throughout the simulation.
Pre-Requisite 1: Energy and Physics
Mathematical models regarding the transfer of energy, especially radiation, will create detailed interactions in lighting, physics, procedural animation and more. The digital environment will only be as realistic as its lighting. Giant leaps have already been made in real-time lighting models, with Unreal Engine 4 offering the first dynamic subsurface scattering available in a major production tool. Future headway into the understanding and calculation of these energies will blanket scenes in photorealism, and help push past the uncanny valley.
Pre-Requisite 2: The Elements and Chemistry
To get big, we have to start small. The chemical elements that make up the universe have to be defined, with all the scientific data to back up how they interact, the systems they create, and the structures they build. The context and naturally occurring location of each element at different points in time will shed light on major global events and eventually resource management.
Pre-Requisite 3: Material Sciences
Building on atomic structures, the project has to understand different materials and their properties. Stone differs from obsidian, early bronze metallurgy differs from steel, water differs from oil, mud and clay differ from concrete. As we move into the age of nanomaterials, the characteristics of the world around us, at a molecular level, are rising into the limelight, and are even being altered. Using an overarching class identification system, a database of microstructures, properties and elements will inform how objects and materials react under different stresses and energies.
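Such a class system could begin as little more than a lookup of rough material properties. The figures below are illustrative approximations, not database-grade values:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Material:
    name: str
    density_kg_m3: float   # rough bulk density
    tensile_mpa: float     # rough tensile strength

# Illustrative ballpark figures; a real database would carry full
# microstructure, composition, and era-specific metallurgy data.
MATERIALS = {
    "bronze": Material("bronze", 8800, 350),
    "steel":  Material("steel",  7850, 500),
    "clay":   Material("clay",   1900, 5),
}

def fails_under(material_name, stress_mpa):
    """Does the material fail under a given tensile stress?"""
    return stress_mpa > MATERIALS[material_name].tensile_mpa

print(fails_under("clay", 50))   # clay shatters well below 50 MPa
print(fails_under("steel", 50))  # steel does not
```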
With this fundamental knowledge at its core, the system can now begin to recreate the scene.
Step 1: Geography
The first thing we’ll see on our virtual trips is a grounded representation of earth. Drawing on topography, elevation, cartography, remote sensing and simulation, we can jump to any place on earth, at any time, and see a barren representation as if no animals, humans or plants had ever lived on the surface. At this stage, what’s critical is accurately representing the geographic landmarks that help sell the reality of a location.
Step 2: Climate
Because we’ll know our coordinates, altitude and terrain, the simulation will identify our position in relation to major bodies of water and generate an approximation of the weather. Temperature, humidity, atmospheric pressure, wind, precipitation, atmospheric particle counts, and astronomical data will coat the scene in ambiance and later influence approximations down the graph. We have incredibly accurate data for specific moments in history, but for most others we’ll have to rely on representations from written accounts of the event.
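One small, well-understood piece of that approximation can be sketched directly: adjusting a baseline temperature for altitude using the standard atmospheric lapse rate of roughly 6.5 °C per 1,000 m. The baseline itself is a hypothetical stand-in for real climate records:

```python
# Standard atmospheric lapse rate: temperature drops ~6.5 °C per 1000 m.
LAPSE_RATE_C_PER_M = 0.0065

def temperature_at(sea_level_temp_c, altitude_m):
    """Approximate air temperature at altitude from a sea-level baseline."""
    return sea_level_temp_c - LAPSE_RATE_C_PER_M * altitude_m

print(temperature_at(20.0, 1000))  # 13.5
```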
Step 3: Wildlife
Most of the flora and fauna that share the world with us have yet to be discovered. The approximately 1.7 million identified vertebrates, invertebrates, plants, fungi, and microorganisms all have vastly different traits and live in ecosystems where diminutive change can have colossal repercussions. Bringing creatures to life requires an understanding of taxonomy, evolutionary biology, fossil reconstructions, and migratory and extinction data. Authenticity comes from the scale, mass, movement, sounds and simulation of the polyphagous systems of the animal kingdom.
Foliage’s tensile strength, calculated using the material science pre-requisite, will allow plants to realistically react to wind and other action in the brush. Auxiliary AI controlling photosynthesis will change a plant’s natural growth pattern. Interaction with local populations will change the probability of access to the sun, infection and death.
An organism’s kinematics will be largely derived from the analysis of the biological system, specifically the relationship between the skeletal and muscular systems. Complex motion, including secondary movement, weight, and the exchange of forces will apply a level of physicality to the smallest spider and the largest mammoth. Sounds will reflect an emulation of the respiratory system. Communication between different groups can range from pheromone trails to facial expression and gestures. A catalogue of intraspecies and interspecies relationships needs to be developed to fine tune the AI.
Step 4: Humans
As a species, we throw a wrench in the evolutionary game. There’s before us, and there’s after us. This step is to define our transition from ape to Homo sapiens, including regional adaptations that have occurred in the last 10,000 years. Recovered artifacts, skeletal structure, brain size, and migration patterns will change our AI to unlock different levels of understanding in response to new discoveries and wildlife. Inherited traits among groups and immune system development will exist at this step. Any factors that can affect the human body, such as risk for disease or illness within segments of the populace, are built out here.
Step 5: Culture
This is where things get crazy interesting. After our genetics are filed, the final layer delves into the subject of anthropology.
Language and other means of communication, including gestures and facial expressions, will be mapped to different times, locations and connotations in history. Thumbs up was not always so kind. The Latin pollice verso, “with a turned thumb,” refers to the judgement of ancient Roman crowds towards a gladiator’s victory or defeat. It was adapted in medieval Europe to seal business transactions, and inevitably became associated with good feelings in Western cultures. In the Middle East, the gesture represents the equivalent of “up yours.” These variations in a simple hand signal highlight how important context is. Geotagged and evolutionary semantics add a whole new layer to this complex Rosetta Stone.
Class, gender and race roles in different groups will explore the development of occupations past the hunter-gatherer archetype. As we became more efficient and began to settle in units, new opportunities emerged. This substep looks at not only the psychology behind each job, but also the evolution of the job itself, in relation to the needs of the collective.
Clothing is deeply tied to status, religion and sexuality. Materials and styles have always been region-specific to protect from the elements, with more ornate displays featuring rarer cloths and precious materials reserved for leaders and those of social standing. Pigments, dyes, sewing tools, production techniques and fashions based on representations in art and archaeological finds will accurately clothe agents in the virtual world.
Agriculture and other means of sustaining our numbers affected our diets, settlements and expansion. Mapping certain crops and trends in animal domestication to different locations over time will reveal trade routes, change what was sold in local markets, and adjust the virtual biology of the local populace.
The arts add flavor to our lives. With major preservation efforts underway today such as Google Art Project, we’re being given a principal look into the creative spirit of great artists. Expressive mediums spanning everything from acting, animation, comedy, culinary arts, dance, design, film, games, illustration, music, painting, photography, sculpture, stage and TV will be catalogued and traced through time. Era and location specific displays will fill the simulation.
Tools and objects have advanced from arrowheads to iPhones in 72,000 years. A massive database of artifacts needs to be constructed, referring back to the material sciences pre-requisite, that demonstrates the makeup and purpose of devices used in daily life. Through a new product classification system, we can map advanced functions and new human abilities to growth in different disciplines and changes in behavior over time.
Science, invention and discovery through the ages have long sought to answer the question “why?” Thanks to research journals, field work, and attributable breakthroughs, we can examine their effect on our understanding and their relation to religion and civil advancement.
Recorded texts and historical tomes often give us the only glimpse we have of ancient civilizations and empires. Semantic analysis will cross reference different perspectives on the same event or subject to create a realistic view of a scene or culture. The chronology of the source material will be important to distinguish how accounts change over time. If available, additional information on authors will be taken into account.
Government and economic models have come and gone as currency and social norms have shifted. The effect of these models on the psychology and GDP within class order will have an impact on citizen AI. As an example, slavery, while an abhorrent practice today, was a key component in resource allocation in now-extinct economies. The stability of an economy and government sets expectations in transactions and can help give a more realistic impression of how commerce and hierarchy would have existed at different points in time.
Religion, philosophy and mythology have been at the core of our cultures since the very beginning. Wherever there is lack of understanding, there has been explanation, questioning, and metaphor. Myths are at the heart of stories that have been retold for generations. Archetypes that were true thousands of years ago still hold up today. Semantics will be used to chart tales between cultures and continents. Religion and its impact on social groups will influence a secondary motivation in AI agents.
Movements and events can be brief moments in time that live forever. Iconic leaders, battles and cultural revolutions carry symbolic and societal changes that are felt worldwide. Acumen towards the causes, characters, settings and effects of these moments will add depth to their recreations.
As self-broadcasting becomes pervasive in the 21st century, modern status archives will give an unparalleled look into our zeitgeist down to the minute. Usenet, IRC, instant messengers, Myspace, Facebook, Twitter, Foursquare…all of these services illuminate what people were talking about and in what connotation. Check-ins chronicle where we were, and can show trending locations for every hour of every day in every city. The media we publish on sites like YouTube or Imgur track what was popular in different social groups and categories while preserving the original assets. The Internet Archive will show how the most disruptive invention of the 20th century evolves as we do.
We have more information on certain moments in time than others. Using these mile markers on the space-time superhighway, we’ll simulate the transition between the two closest data sets. If more information is discovered and appended to a specific subclass of data, the sim will recompute and become more accurate.
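That transition between mile markers is, at its core, interpolation between the nearest dated data points. A minimal sketch with invented figures for one city's population:

```python
import bisect

# Hypothetical "mile markers": (year, population estimate) for one city.
markers = [(1400, 55000), (1500, 70000), (1600, 76000)]

def estimate(year):
    """Linearly interpolate between the two closest dated data points."""
    years = [y for y, _ in markers]
    i = bisect.bisect_left(years, year)
    if i == 0:
        return markers[0][1]            # before the earliest marker
    if i == len(markers):
        return markers[-1][1]           # after the latest marker
    (y0, v0), (y1, v1) = markers[i - 1], markers[i]
    return v0 + (v1 - v0) * (year - y0) / (y1 - y0)

print(estimate(1450))  # halfway between 55000 and 70000 -> 62500.0
```

Appending a newly discovered marker to the list and re-running is exactly the "recompute and become more accurate" step described above.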
To recreate a moment in time, we need to fall back on the knowledge available for distinct eras.
Let’s say, for the sake of argument, we want to travel back to renaissance era Florence to witness the mad monk Girolamo Savonarola and his followers burn banned books, art and cosmetics in the bonfire of the vanities…where would we begin?
To accurately depict the scene as it has been recorded in history, we need to draw on a complex web of facts and patterns. Following the Semantic Graph, the landscape of Florence will be generated and flush with cypress, chestnut, holm oak, beeches and fir trees. Roe deer, wolves, wild goats, and boar will roam the outskirts, while hares, rabbits, foxes, bobcats, weasels and martens skulk the underbrush. Over a hundred species of bird will fill the treetops and provide a rich ambient soundtrack, including northern gannets, red kites, ospreys, falcons and sparrows. Referring back to archived maps of the day, streets will be cut from the forest and the city will sprout up. Written and artistic accounts of districts and landmark buildings will shape the architecture, hygienics and occupations within each neighborhood. AI agents for different classes and allegiances to Savonarola will fill the crowd in the main square. Hero AI of named historical characters will be placed in position. References to temperament, objects and religion in accounts of the event will drive language, the style and material of burned vanities, rituals, and the dress of followers and locals. Background knowledge pertaining to the Borgias papacy and Savonarola’s splinter preaching will assist in tweaking spoken statements.
Extreme freedom is allowed in this space. Interactivity will scale depending on your chosen vantage point. Observing as an invisible, non-existent citizen will allow you to roam in any direction without breaking the simulation. If you choose to enter the first person view of a bystander or a historical figure, you lose the ability to control movement, but gain a unique, never before seen perspective on the event. Eventually, the option to alter history by taking control of a citizen or main character will let you leave your mark and see how things might have played out. This is where the platform diversifies to power the next generation of creative interactive entertainment.
We tend to live in a world of self-fulfilling prophecy. The science fiction of the ’50s and ’60s has driven many of the technological advancements we take for granted today. The power of vision towards something beyond the current realm of reality has challenged us to action throughout history. There are endless possibilities when it comes to how we’ll experience semantic time travel, but I have a few theories, given what we’ve dreamed before.
The holodeck as popularized in Star Trek will one day exist. A room built for the senses, it will transport us to any environment we can imagine. As a bridge to this experience, dedicated spaces enabling free range of motion on a 360° treadmill, combined with haptic feedback in our clothing, augmented reality POV and advanced climate and olfactory control will let us roam the virtual world using our full range of motion. By matching virtual boundaries with the physical boundaries of the space, advanced motion tracking will reposition the 360° treadmill to simulate walking up to a barrier or object. Using haptic gloves, running your hands over the real world wall will give a sense of texture from the simulated space. We’re still a ways off from conjuring matter out of thin air, or providing resistance around invisible objects, but one day the sorcery will cease to be science fiction, and enter science fact.
For years we’ve been trying to map the human brain to unlock its full potential, supplement our learned knowledge with existing databases and bottle the wilds of imagination. Signal breakthroughs have powered advanced prosthetics in the last decade, returning fine motor function to amputees through synthetic limbs like Dean Kamen’s Luke Arm. The first recorded images deciphered from brain activity were put to digital video at the end of last year. The vision portrayed in the above ‘PlayStation 9’ ad, and countless film references such as Total Recall, predicts we’ll wear a device that enables us to lucid dream at will, with the stage set through a computer program. At some point, a collective consciousness in virtual environments will be available. The ethics of and time commitments to these experiences will be heavily debated and regulated.
We’re closely approaching the day Transhumanism reaches the mainstream. In video game series like Assassin’s Creed, we’re already able to join long dead counterparts to relive history. Deceased musicians are starting to take the stage once more in animated holographic concerts. When our quantified self and social data merges, will we become the next permanent fixtures for a future Semantic Time Traveler? The technology has global repercussions but the potential is limitless. Let’s get started.