Over the last few years, there has been a worldwide resurgence of Artificial Intelligence (AI), to the point where it dominates almost all business, investment, and ethical narratives. Two extreme views of AI have formed: one holds that AI will augment humans, the other that AI will diminish them, even to the point of threatening humanity’s existence. The truth likely lies somewhere in between. Many of the arguments on both sides are informed by results from a technique called Generative Adversarial Networks (GANs), whose behavior is often described in anthropomorphic terms usually reserved for human motivations.
The techniques used to train AI algorithms, broadly called Machine Learning, essentially mimic Operant Conditioning, which uses positive and negative reinforcement to increase the rate of a desired outcome or decrease the rate of an undesired one. In GANs, this is operationalized by training two models with opposing reward or loss functions: a generative model (the generator) and a discriminative model (the discriminator). For example, the generator will create images that the discriminator must classify as either real or artificial (“fake”). The two models are trained over a large number of iterations, each improving the other. Eventually, the discriminator learns to tell fake images from real ones, and the generator uses the discriminator’s feedback to learn to produce convincing fakes. The same adversarial setup generalizes to other contexts, such as a network intruder (playing the generator’s role) versus a network protector (playing the discriminator’s role).
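The adversarial loop described above can be sketched in a few lines of code. The toy below is an illustrative assumption rather than a production GAN: the “images” are just numbers drawn from a Gaussian, the generator is a linear map of noise, and the discriminator is logistic regression, with gradients derived by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real data": samples from N(4, 0.5) that the generator must learn to imitate.
def real_samples(n):
    return rng.normal(4.0, 0.5, n)

# Generator: a linear map G(z) = a*z + b applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch, steps = 0.02, 32, 4000
for _ in range(steps):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # i.e. gradient ascent on log D(xr) + log(1 - D(xf)).
    xr = real_samples(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)

    # Generator step: push D(fake) toward 1 using the discriminator's
    # feedback, i.e. gradient ascent on log D(G(z)).
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {fake.mean():.2f} (real data mean 4.0)")
```

After training, the generator’s output distribution drifts toward the real data purely through the discriminator’s feedback, mirroring the reward/penalty loop described above.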
An example of a model trained against multiple competing objectives is “Style Transfer.” Here the model is given two photos and must produce a single picture that satisfies two loss criteria: one rewards preserving the content of one image, while the other rewards preserving the style of the second. Say the two pictures are an abstract pattern and an elephant (see below). One criterion will try to make sure the resulting picture is true to the abstract pattern, while the other will make sure it still looks like an elephant. After many iterations, a picture results that satisfies both loss/reward criteria.
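The idea of optimizing one picture against two competing criteria can be sketched without neural networks at all. In this toy (the signals and weights are illustrative stand-ins, not the actual style-transfer algorithm, which matches statistics of deep-network features), a “content” loss keeps an output array close to one source signal, while a “style” loss pushes its summary statistics toward those of a second signal.

```python
import numpy as np

n = 200
t = np.linspace(0, 2 * np.pi, n)
content = np.sin(t)                    # stand-in for the "elephant" photo
style_src = 2.0 + np.cos(3 * t)        # stand-in for the abstract pattern
# "Style" here is just the first and second moments of the second signal.
t1, t2 = style_src.mean(), (style_src ** 2).mean()

x = content.copy()
lr, steps = 0.01, 4000
for _ in range(steps):
    # Content gradient: keep x close to the content signal element-wise.
    g_content = 2.0 * (x - content)
    # Style gradient: match the mean and mean-square of the style signal.
    s1, s2 = x.mean(), (x ** 2).mean()
    g_style = 2.0 * (s1 - t1) + 4.0 * (s2 - t2) * x
    x -= lr * (g_content + g_style)

print(f"mean {x.mean():.2f} (style target {t1:.2f}), "
      f"corr with content {np.corrcoef(x, content)[0, 1]:.2f}")
```

The result is a compromise: the output stays strongly correlated with the content signal while its statistics shift toward the style target, which is the essence of satisfying two competing loss criteria at once.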
GANs have also been applied to text, with less success. For example, as described in a Deep Learning course taught by University of San Francisco Distinguished Scholar (and former Orange Institute faculty member) Jeremy Howard, a bot was developed that would speak like Friedrich Nietzsche. The model was trained on The Complete Works of Nietzsche. After a large number of iterations the generator began to produce text in a manner similar to Nietzsche’s, but the sentences did not make sense. While companies are working on mapping language to meaning, we are still a long way from that goal.
GANs for voice applications can render a given text string in a life-like imitation of a speaker’s voice from approximately 20 minutes of voice samples. Today, some of the most popular impersonations are of U.S. Presidents Trump and Obama. In the near future the same will likely be done for video: a script will be provided, and footage will be generated to match it.
Some of the darker implementations of AI have been in gaming. One example is AlphaGo’s defeat of the world’s Go champion, achieved by developing new strategies and theories for the game that had not been, and likely would not have been, conceived by human players. While this is not strictly an example of GANs, AlphaGo did use Reinforcement Learning.
In a multiplayer gaming context, an AI upgrade to “Elite: Dangerous,” a multiplayer space simulation, made the AI a significant threat to players: spaceships became incredibly powerful, fought better, pulled players into brawls, and attacked them with upgraded super weapons the AI created itself. The software was subsequently patched to disallow such behavior. Nevertheless, the AI was designed as an antagonist to human players, just as AlphaGo was. In this case, the AI exploited a network bug that allowed it to merge attributes in ways the developers never intended, producing super weapons with extreme capabilities. As with AlphaGo, the AI optimized across domains that its human opponents would not even imagine.
Another example that surprised its developers involved adversarially trained agents designed to negotiate over books, hats, and balls according to different value systems, a seemingly trivial task. Since neither agent was given a human language model but both were still required to negotiate, they created their own language. This synthetic language was simply a means of maximizing their respective reward functions, and hence can be thought of as a negotiating and communication tool specific to that training run. It’s not that the AIs were gossiping behind our backs!
Do GANs, and more generally all AI, represent an existential threat to mankind? SpaceX and Tesla CEO Elon Musk thinks so; he has advocated AI regulation since 2014. More recently, at a gathering of U.S. governors, Musk reiterated that “AI is a fundamental risk to the existence of human civilization” and stressed the urgent need to be proactive about regulation. His concerns may seem realistic given the rapid advances in GANs, which could well accelerate progress toward Artificial General Intelligence. While regulation of AI may be desirable, AI is still a long way from the fanciful representations generated by the entertainment industry. Between the two limits, there is ample space for AI to improve the human experience.