By Praxis and Nathanial.Dread
Source: The Nexian
“If the doors of perception were cleansed everything would appear to man as it is, Infinite. For man has closed himself up, till he sees all things thro’ narrow chinks of his cavern.” —William Blake, The Marriage of Heaven and Hell
Last month, Google released a series of images that has left internet users around the globe mesmerized and bewildered. At first glance, these strange and sometimes unsettling pictures, examples of which can be viewed in this gallery, might be mistaken for the products of a trippy Photoshop filter. When examined closely, however, their extraordinary complexity and level of detail set them apart from other computer-generated images. Beyond their bizarre appearance, these images have drawn a lot of attention due to their uncanny resemblance to the visual aesthetic of psychedelic experiences. Originally dubbed a kind of “AI-generated dreamscape”, the pictures attracted widespread speculation as to their purpose and origin. As it turns out, they were produced by an artificial neural network programmed by researchers at Google, who explained the technology on their research blog, here.
The fact that they were created by an artificial neural network is significant in its own right, but the undeniably psychedelic quality of these images has many curious psychonauts scratching their heads. For example, how did researchers program an artificial neural network to produce images like these, and what does that say about how psychedelics might interact with human neural networks? What doorways are opened by studying neural networks, and how can they help us further understand the nuances of human cognition, such as learning and human development, culture, mental health, creativity, and spirituality?
The images generated by Google’s DeepDream technology have left many of us with a lot of big questions. But lucky for us, a number of the members of the DMT Nexus are professional scientists, doctors, researchers, theologians, and scholars, giving us the rare opportunity to engage with cutting-edge research as it pertains to our collective interest in the psychedelic experience. Nathanial.Dread (ND) is a contributing member of the DMT Nexus whose research in the field of neuroscience overlaps with their fascination with psychedelics. ND has agreed to answer some questions for the Nexian to help us better understand how the DeepDream images are generated, how they are relevant, and why they are so exciting.
What do you do, and what about these images interests you as it pertains to your field?
ND: I study neuroscience, primarily focusing on the systems-level; although I’ve also done some work in molecular neuroscience (thanks in large part to my interest in psychedelics, which was what opened up the magic of the brain for me), as well as cognitive neuroscience and, increasingly, computational and information-theoretic work. Most of my academic work looks at how our brain makes sense of the constant onslaught of sensory information that it’s getting. Somehow, we’re able to condense a whole universe of stimuli down into a form that can be easily processed and understood (call it “consciousness”). A large part of that process, of building a coherent picture of the world, involves learning, and the interaction of very high-level cognitive functions with lower-level sensory ones. It’s not enough to be able to see your environment; if you want to survive and reproduce you need to be able to “know” what it is you’re looking at, and be able to do that very, very quickly. I look around my room and I can recognize the chair, the table, my coffee cup…all pretty much in the moment. In this way, I build up an understanding of where I am in the environment (in this case, in my room) and can maneuver around it. If I didn’t know what anything was, I wouldn’t be nearly as effective.
That kind of object recognition is a really big question in the world of neuroscience. We do it so easily – it’s one of the first things we learn to do as babies: to associate certain sensory stimuli with cognitive representations of things; but we really have very little understanding of how we do it. This kind of paradox, that something can be deeply intuitive but also a total mystery, underlies much of my love for both neuroscience and psychedelics.
That’s where these images come in, and why they’re so cool: there are many theories of how the brain might ‘learn’ to do object recognition, and now, with Neural Networks and Deep Learning technologies, we can actually begin to test these theories by building very simple “model brains,” in silico. It allows researchers to test predictions about how brain-like networks learn and behave, as opposed to just waiting around for someone to have a stroke in just the right place, or messing around with the brains of rats and hoping that some part of the work maps onto human cognition, which is hundreds and thousands of times more complex. In this case, they’ve built a network that does what human brains do when recognizing objects; and now that they’ve done that, they can go back through and try to figure out how the neural net is working in a way that we can’t do with humans. It’s all very preliminary work right now (keep in mind that this doesn’t tell us anything about how our brains work, but rather, how they might work), but it’s still very exciting.
What exactly are these images? Could you explain the process behind them?
ND: I’m going to preface this by saying that I am not a computer scientist, and so while I understand the big picture of what’s happening in these neural networks, a lot of the nitty-gritty is beyond me. I’ll try to keep this as easy-to-digest as possible, but it might get tricky.
Before I can talk about what these images are, I’m going to have to explain how one builds a neural network capable of image and object recognition:
A neural network is essentially an array of interconnected nodes that process some kind of input to give a desired output. Ideally, a successful object-recognition neural network (like the one we have in our brains) works like this: sensory stimuli like light come in → that information is processed → the network outputs a specific conceptual object (“that’s a chair, and definitely not a tuna salad sandwich!”).
Neural nets are divided into “layers”, each one of which will have many (sometimes thousands, or tens of thousands) nodes. The first layer is the “input layer,” which, in our case, is the image that we want the computer to recognize. The last layer is the “output layer,” which (in theory) tells us what the input layer is. The output of one layer becomes the input to the next; creating a feed forward system where the net will do some computation on one layer, send that new information on to the next layer, which will do some computation on it, and so on and so forth, until the last layer. Any node can be connected to many nodes in the layer before it and the layer after it. In this way, it gets several inputs from many other neurons (with which it does its calculation), and then sends its output to many other nodes downstream. The idea of these ‘layers’ will be very important later.
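For readers who like to see the moving parts, the layered, feed-forward pass described above can be sketched in a few lines of Python. This is strictly a toy illustration, not Google’s code: the layer sizes and weights below are invented, and real networks have thousands of nodes per layer rather than a handful.

```python
import math

def feed_forward(inputs, layers):
    """Pass an input vector through successive layers of weighted nodes.

    Each layer is a list of nodes; each node is a list of weights,
    one per node in the previous layer. The output of one layer
    becomes the input to the next, exactly as described in the text.
    """
    activation = inputs
    for layer in layers:
        activation = [
            # Each node sums its weighted inputs, then squashes the
            # result with a sigmoid so its output stays between 0 and 1.
            1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(node, activation))))
            for node in layer
        ]
    return activation

# A tiny two-layer net: a 3-value input feeds a 2-node hidden layer,
# which feeds a single output node.
layers = [
    [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],  # hidden layer: 2 nodes, 3 weights each
    [[1.0, -1.0]],                          # output layer: 1 node, 2 weights
]
print(feed_forward([1.0, 0.0, 1.0], layers))  # a single number between 0 and 1
```

The same structure scales up: an image-recognition net just has a much wider input layer (one value per pixel) and many more layers between input and output.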
This set-up allows the neural network to ‘learn’ in a really interesting way. It’s possible to train a network by giving it an input and also telling it what the output should be (you show it a picture of a gorilla, and also tell it that it should output “gorilla”). The network then alters its internal structure in such a way that, given the input, you get the assigned output. Then you can show it a different picture of a gorilla and tell it “this is also a gorilla,” and it will shift its structure again so that now it can look at both pictures and output “gorilla” for both. Over the course of tens of thousands of trainings on pictures of gorillas, eventually the network will be able to learn what “gorilla-ness” is, and be able to recognize them when tested (i.e. shown a picture and not told what it should output). This is pretty much how we learn as children (and adults): at first, I may have no idea what psychedelic mushrooms are; but after seeing them and being told, “these are magic mushrooms” dozens of times, I can eventually get to the point where someone could show me a cubensis, ask, “what’s this thing?”, and I can confidently respond, “that’s psilocybe cubensis, a magic mushroom.”
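The training loop just described (show an input, state the output it should give, and let the network shift its internal structure) can also be sketched. Again, this is a hypothetical toy, not the actual DeepDream training code: a single sigmoid node adjusted with the classic delta rule, with made-up three-number “feature vectors” standing in for gorilla photographs.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(examples, epochs=1000, lr=1.0):
    """Nudge the weights after every example so the node's output
    moves toward the label it was told to produce."""
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for inputs, target in examples:
            out = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
            # Gradient of the squared error for a sigmoid node.
            delta = (out - target) * out * (1 - out)
            weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
    return weights

# Two invented 'gorilla' feature vectors labeled 1, one 'not gorilla' labeled 0.
examples = [([1.0, 1.0, 0.0], 1.0),
            ([1.0, 0.8, 0.1], 1.0),
            ([0.0, 0.1, 1.0], 0.0)]
w = train(examples)
for inputs, target in examples:
    out = sigmoid(sum(wi * x for wi, x in zip(w, inputs)))
    print(round(out, 2), target)  # trained outputs land near their targets
```

After enough passes the node’s outputs sit close to the labels it was trained on: a crude version of learning “gorilla-ness” from repeated examples.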
So that is essentially the machine that Google has made: a many-layered neural network that was trained on thousands of images of animals, plants, people, and buildings, and that, hopefully, can now recognize things it’s never seen before.
The problem with this model, and this is where the pictures come in, is that we don’t really know how exactly the neural network comes to learn to recognize images. When being trained, the network will randomly alter its internal structure until it gets the desired result, over and over again, which leaves us with an incredibly complicated network that’s pretty much impossible to tease apart. It essentially creates the same ‘black box’ as a human brain. We know what it can do, and have a rough sense of its general structure (the layers still exist, but the connections between them change), but the specific processes get lost. That’s where these images come in: remember how I said that the different ‘layers’ of each net were important? It turns out that the way these images are made is by feeding an image into the object recognition network and ‘amplifying’ the output of one particular layer. This allows us to see what parts of the image are getting processed at each level of the network. If we do this for each layer, a pattern starts to emerge:
Enhancing lower layers causes simple lines and zigzags to emerge: at that level, the network is just looking at lines and contrasts. See this picture of gazelles with the lower levels enhanced:
Enhancing higher layers causes more complex and abstract images to appear, as seen in this cloud image. Notice how, instead of crude shapes, colors, and lines, decidedly more complicated images are emerging. This is what the network is ‘seeing’ at these high-level layers.
This tells us something really cool about the way these networks recognize images: if you feed a picture into it, it will start by just looking at simple things in the image: lines, contrasts, edges, stuff like that; and by processing all this information, it can, layer by layer, build up a complex representation of an abstract thing; like a dog, building, or fractal.
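The ‘amplification’ step can be sketched too, with heavy caveats: here the ‘layer’ is an invented stand-in function that responds to horizontal contrast, and its gradient is estimated numerically, whereas DeepDream back-propagates through a full trained network. The shape of the loop is the same, though: adjust the *image*, pixel by pixel, so that a chosen layer’s activation gets stronger.

```python
def layer_activation(image):
    """Toy stand-in for one layer's response: total horizontal contrast."""
    return sum(abs(row[i + 1] - row[i]) for row in image for i in range(len(row) - 1))

def amplify(image, steps=50, rate=0.1, eps=1e-4):
    """Gradient ascent on the image: nudge pixels to boost the layer's output."""
    img = [row[:] for row in image]
    for _ in range(steps):
        for r in range(len(img)):
            for c in range(len(img[r])):
                # Numerically estimate d(activation)/d(pixel)...
                img[r][c] += eps
                up = layer_activation(img)
                img[r][c] -= 2 * eps
                down = layer_activation(img)
                img[r][c] += eps
                grad = (up - down) / (2 * eps)
                # ...then push the pixel in the direction that makes
                # the 'layer' respond more strongly. (Pixel values are
                # left unconstrained here; this is only a toy.)
                img[r][c] += rate * grad
    return img

image = [[0.5, 0.6, 0.4],
         [0.5, 0.5, 0.5]]
print(layer_activation(image), layer_activation(amplify(image)))
```

Exaggerating whatever the contrast-loving ‘layer’ already faintly responds to is, in miniature, why amplifying low layers of the real network covers an image in lines and zigzags, while amplifying high layers fills it with dogs and faces.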
This is very, very similar to how we think human object recognition works. When visual information arrives in the occipital lobe, the brain is mostly looking for contrast and line orientation; however, as information moves from the primary visual cortex to the visual association areas, higher-level processing occurs: colors, lines, shapes, and patterns all start to appear. As the pathway (known as the occipitotemporal pathway) moves farther away from the visual cortex, it begins to interface with other pathways; those that are involved with learning and memory. It is thought that this is how we can come to associate visual stimuli with memories and cognitive conceptions of objects in our environment. There’s a lot of evidence that this is how we do things, and it’s really fascinating to see that a computer program, when asked to do object recognition, will naturally self-organize into a pattern similar to the human brain. It has some interesting philosophical implications. Is it possible that there is an ‘optimum’ organization for object recognition that any ‘machine,’ be it human, computer, or alien, will always adopt?
Similarly (and we’ve entered the realm of speculation here), but the psychedelic quality of these images suggests that maybe, when we take a psychedelic drug, it is doing something similar to the ‘enhancement’ that generated these images. Perhaps, via 5-HT signaling, certain parts of the visual processing system are getting ‘turned up’ in a way that causes us to perceive patterns and shapes where we normally can’t see them.
At the lower levels, the network is simply enhancing things that are already there; such as lines and contrast. Why is it that higher levels of processing ‘see’ things in such vivid detail that aren’t actually there, such as animals and faces? Is this similar to the phenomena of pareidolia (how people often ‘see’ faces or other objects when looking closely at clouds or wood-grain)?
ND: That’s the real crux of the matter, and I don’t have a good answer for you. I’m not sure anyone does. We know it goes something like this:
The lower level sees lines and orientations. Once it has picked out all the lines and orientations (and where they are), it sends that information up to the next level, which uses those lines to make simple shapes. Say the network sees that there are three adjacent lines, each one of which is 60 degrees apart; and it figures out that this means an ‘equilateral triangle’. Once it’s done this for all the information coming from the first layer, it sends it on to the next layer, which takes that information and makes an even more complicated image.
At some point the information stops being just visual data (lines, angles, colors, shapes), and becomes ‘concepts’ (faces, fish, frogs), which can have wide variances between them (there are many, very differently shaped things that we call dogs – compare a chihuahua with a great dane).
How this happens, we still don’t know. At some point the complexity of the images becomes enough to trigger ‘conceptualization,’ but that’s still a bit of a mystery.
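The first rung of that hierarchy, simple ‘cells’ that each respond to one line orientation, is concrete enough to sketch directly. The kernels and test image below are invented for illustration; the first-layer filters these networks actually learn (and the receptive fields in visual cortex) are considerably richer.

```python
# Two tiny 'orientation detectors': each kernel responds strongly to
# edges of one orientation and weakly to the other.
VERTICAL = [[-1, 1], [-1, 1]]    # fires on left/right brightness changes
HORIZONTAL = [[-1, -1], [1, 1]]  # fires on top/bottom brightness changes

def response(image, kernel):
    """Slide the 2x2 kernel over the image and sum the absolute responses."""
    total = 0.0
    for r in range(len(image) - 1):
        for c in range(len(image[0]) - 1):
            total += abs(sum(kernel[i][j] * image[r + i][c + j]
                             for i in range(2) for j in range(2)))
    return total

# A vertical bar of bright pixels: the 'vertical cell' should fire
# much harder than the 'horizontal' one.
bar = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
print(response(bar, VERTICAL), response(bar, HORIZONTAL))
```

A real first layer runs many such detectors at every position and hands the resulting map of “lines and where they are” up to the next layer, which combines them into angles and shapes.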
The implication that psychedelics might operate similarly in human neural nets is fascinating, could you explain what this hypothetical process could look like for those who may be interested in the mechanical details of such a theory?
ND: Ha, I really wish I could confidently answer ‘yes’ to that, but unfortunately, so much of this is unclear that it’s pretty much impossible to do so without going off into speculation. I’ll try my best, however.
It is possible that not all the information processed at each ‘layer’ of processing in the brain is passed on to the next highest level – information processing is metabolically expensive, and so the brain works to optimize the efficiency of its job: you need enough information to survive, but there’s no point processing extraneous stuff, and so that just gets lost. This is how magic tricks work: most of what your eyes see just gets dropped by the brain because it’s not important enough to waste valuable energy analyzing (the brain is the most metabolically expensive organ in the body).
It may be (and again, this is speculation) that psychedelics work by disinhibiting circuits that suppress information transfer in the visual system. We know that they behave in similar ways in other parts of the brain, although most of that research has been examining sensory gating phenomena; but a similar action may be at work here too. If more information were passing from one layer to the next, this could be thought of as ‘enhancing’ its output, which might create a visual percept similar to what we see both during our own psychedelic experiences and in these images.
To summarize, the images are the result of feeding visual input into the object recognition network and amplifying a specific output, which might be similar to how psychedelics produce the ‘hallucinatory’ phenomena they are so well known for. When I think about the nature and aesthetic of open-eye psychedelic visuals, this resonates quite a bit. But what about closed-eye visuals? When I close my eyes on psychedelics, a vast inner landscape opens itself to me. What happens when you feed a “blank” image into the network?
ND: You get something that looks like the image on the right, depending on what you ask it to see.
If you start with just noise, it will come up with some kind of vaguely defined image. If you iterate the process (taking the output image and putting it back into the machine again), very complex fractal scenes will begin to develop. The dominant themes will be whatever the machine has spent the most time looking at (i.e. a network trained on architecture will give you something like the image below to the left, but one trained on landscapes might give you something like the first image).
Enhancing different layers (remember the gazelles with the lines and sky with complex figures?) gives different effects. The first two images I showed you were probably made using enhanced, high-level layers; hence the complex visual scenery. If you do the same iterative process (feed in static and run the output as the input over and over again) with lower layers enhanced, you get simpler geometric things, like this image below to the right.
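That iterate-on-noise loop might be sketched like this. The ‘amplify’ step below is a crude invented stand-in that just exaggerates each pixel’s deviation from the image mean; the real process re-runs a full trained network on each pass, which is what turns faint fluctuations into elaborate fractal scenery rather than mere saturation.

```python
import random

def amplify_step(image, gain=1.2):
    """Exaggerate whatever structure the noise already hints at,
    clamping pixels to the valid 0..1 range."""
    mean = sum(image) / len(image)
    return [min(1.0, max(0.0, mean + gain * (p - mean))) for p in image]

random.seed(42)
image = [random.random() for _ in range(16)]  # start from pure noise

for _ in range(20):
    image = amplify_step(image)  # the output becomes the next input

# After enough iterations, faint random fluctuations have been
# driven all the way to the extremes.
print([round(p, 2) for p in image])
```

The key property is the feedback: each pass commits harder to whatever the previous pass ‘saw’, which is why feeding DeepDream’s output back into itself produces ever more confident, ever more elaborate imagery.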
It’s possible to run the network backwards (asking, in effect: “what image would give this output?”). This turns up some weird things. The example they gave used weights for lifting.
If the machine is asked, “What does a dumbbell look like?”, it gives you something like this:
Notice how you can tell it’s a dumbbell, but an arm also appears, because when the network was learning what dumbbells looked like, it never saw them without someone’s arm attached, lifting the weight.
So the content of the images is more or less dictated by what the network is familiar with, and this is true both for generating images and interpreting them? This being the case, what insights might this offer us in light of the fact that the content and interpretation of psychedelic experiences varies drastically from person to person? Furthermore, could you elaborate on the connection between visual input and memory in humans? To what extent does memory play a role in perception, and what implications do you see this having?
ND: Yes, the content of the image is largely a function of what the network has seen. As far as I know, it cannot ‘create’ anything completely new; just combine things it knows into different combinations. It’s similar to the idea that, when you’re dreaming, your brain doesn’t make anything up (eg: all the faces you see are actually real people you’ve seen while awake, etc. I have no idea if that’s true, by the way).
As for the implications for the human psychedelic experience, it’s hard to say; although it makes one think, certainly. I know in the case of some dissociative anesthetics (DXM, in particular) the material seen when the eyes are closed tends to mirror things that the tripper had been seeing earlier that day or week (if you drank a bunch of cough syrup after spending a weekend camping in the woods, you’d probably see forest imagery, etc). Similarly, we tend to dream about things that have happened to us, both recently and in the distant past.
My hunch (and this is, again, just a hunch…there is almost no research being done into this right now), is that a similar thing is happening during a visionary psychedelic experience like DMT – all of the imagery and experiences we’re experiencing while tripping are probably generated by combining concepts, memories, and other things already in our heads in wildly novel and creative ways. The thing is, the human experience is so varied, and we are exposed to so many things, that the number of combinations of things is practically limitless; so it may be impossible to trace any one element of the experience back to a specific memory or “moment of learning.” We do know that psychedelics like psilocybin can increase autobiographical memory recall, in addition to speed learning in rats; so they are certainly interfacing with the learning and memory circuits, although how that relates to our visual system is far from clear.
There are, of course, people who maintain that during the psychedelic experience your consciousness actually travels to other worlds and the entities we meet are real beings, and I can’t really prove them wrong. It’s not what I think is happening, but we’re all (myself included) engaging mostly in wild speculation here.
As for the vision/cognition pathway, it’s called the occipitotemporal pathway, and it runs from the very back of the brain (V1 of the occipital lobe) all the way down and forwards to the center of the temporal lobe. What we know about this pathway comes largely from stroke and lesion studies: people will get brain damage, and parts of their cognition will fail. By looking at what has changed in their consciousness, and what parts were damaged specifically, we can sort of map out the functions of the different regions of the brain. Here’s the pathway in a little bit more depth:
Vision arrives from the thalamus at the V1 region of the occipital lobe. Here, very, very simple processing is done: the brain is largely looking at lines and contrasts. Certain groups of cells respond to vertical lines, certain groups respond to diagonal lines at 45 degrees, others to diagonals at 60 degrees, etc. From here (as with the layers in the neural network), the information is passed on to slightly higher-level circuits that integrate the lines and their orientations to form angles and basic shapes. From that layer, the information is projected on to other layers, which might see color or curves. As the information gets passed on, more and more complex elements of vision are teased out of the data: there is a region that integrates it all across time to create our percept of motion, for example (people with damage to this region see the world as a series of still frames).
As we move down along the occipitotemporal pathway, we ultimately reach the circuits that can do high-level object recognition (this is only possible once our visual circuits have created very detailed representations of what we’re seeing; the result of hierarchical, many layered processing). In the temporal lobe, you have regions like the fusiform face area (FFA), which is what allows us to integrate a nose, mouth, eyes, and ears into the visual concept of “a face.”
Once we get to this highest-level of visual processing, information projects to the hippocampal regions, which are important for learning and memory. It’s hard to be certain what’s going on, but it is assumed there is bidirectional feedback from the temporal lobe regions associated with vision and the hippocampal regions associated with memory. This would allow the brain to compare what it is currently seeing with past memories, to solidify the process of object recognition (eg: “I’m seeing something round, red, and shiny. Have I seen things that look like this before? Yes! Apples! This is probably an apple and good to eat!”), and store the current visual percept for later use to recognize things in the future.
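The compare-with-memory step in the apple example might be sketched as a simple best-match lookup. Everything here is invented for illustration, from the feature sets to the matching rule; how the brain actually represents and compares percepts is, as noted above, far from settled.

```python
def similarity(a, b):
    """Count how many features two percepts share."""
    return len(set(a) & set(b))

# A tiny invented 'memory store' of previously learned objects.
memories = {
    "apple":       {"round", "red", "shiny", "small"},
    "fire truck":  {"red", "shiny", "large", "loud"},
    "tennis ball": {"round", "small", "fuzzy", "green"},
}

# The current visual percept: something round, red, and shiny.
percept = {"round", "red", "shiny"}

# Recall the stored memory that best matches what we're seeing now.
best = max(memories, key=lambda name: similarity(percept, memories[name]))
print(best)  # prints "apple"
```

A new percept could then be stored alongside the others, which is the “save it for later use” half of the loop: recognition sharpens memory, and memory sharpens recognition.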
Popular ‘psychedelic philosophy’ often touches on the idea that our human experience of reality is shaped by our social, cultural, and interpersonal conditioning. In this case, we know that the content of the images produced by the network is dependent on its prior training. Much like how Google’s DeepDream was unable to distinguish between a dumbbell and the arm holding it because of what the network had previously been exposed to, might the ways in which humans interpret, analyze, and output information operate similarly? How could we utilize artificial neural networks to better understand how human cognition is shaped by our interactions with culture and society?
ND: I think you hit the nail right on the head with the dumbbells example. It’s easy for us, as researchers, to say, “Well, this is obviously wrong; and here’s why, silly neural network;” but because of our native biases, it’s very hard to tell when we’re engaging in exactly the same behavior. A good example of this is biological sex. It’s a concept that we are steeped in every day of our lives: it’s an underlying assumption that permeates all of medicine and biology – that there are two distinct kinds of human beings, male and female. If you look at the actual science though, it turns out that the lines between sex are not nearly as well-defined as we would like; and there’s more of a sex-spectrum than there are two different classes. For many, that might seem weird; much in the same way that it would probably be weird to the neural network to think of a dumbbell without an attached arm. The concept of privilege is another great example, which often goes unnoticed by our culture largely because many of us have been socialized not to see it. Most of us have learned, through culture, an image of what ‘society’ and ‘reality’ are, and have a very hard time conceptualizing something different because we’ve had the same message drilled into our heads over and over again.
I am not convinced that psychedelic drugs, at least on their own, are as good at breaking down cultural programming as we think they are. I know plenty of frequent psychedelic users who continue to prop up capitalist systems of domination and exploitation, all the while talking about how ‘awake’ they are to the ‘truth.’ Yada, yada, yada. I don’t think there’s anything intrinsic to the psychedelic experience that makes you more ‘awake’ or ‘conscious,’ or anything like that. I think that in the right context psychedelic drug use could facilitate a certain level of engagement with the world; but that requires that people read, listen, and educate themselves in addition to taking psychedelic drugs.
Earlier you touched on the fact that some people are convinced that when they take psychedelics they travel ‘outside of themselves’ and actually visit alien worlds. The DeepDream neural net is trained to recognize things like animals, people, buildings, etc…and thus we see those things reflected in the images that the network outputs. For example, a picture of a cloudy sky, when fed into DeepDream, might come out as a mural of human faces, birds, dogs, and fish. Given this model, it makes sense that we often see faces, landscapes, and fractals when we trip. So why is it that trippers will sometimes see and/or interact with things that appear to have no connection to our own experiences? As you said, many people report encounters with unknown technologies, foreign languages and symbols, witnessing some sort of cinematic ‘unfolding’ of a biological or cosmic process, even interactions with supposed alien entities. If you had to make your best guess, where would you predict this kind of information is coming from?
ND: I think (and cannot say for certain) that it has to do with how good our brains are at recombining things we’ve already seen and know, compared to the artificial neural network. The artificial network has exactly one input to work with: digital images of all the things you said – faces, birds, dogs, fish, etc. – and it can output combinations of those things.
Our brains can work with many, many more different things, and process them in very different ways. The number of ‘inputs’ our brains receive is staggeringly vast. In addition to vision, we also get language and symbols, auditory information, 3-dimensional world-views, etc…; and we can put all that together into really abstract concepts, such as ‘technology,’ ‘languages,’ ’emotions,’ ‘mathematics,’ and of course our normal nouns like dogs, cats, faces, etc.
If the model of perception and cognition laid out here is correct, I think that when we smoke DMT what’s happening is not radically different from what’s happening with these images, it’s just many millions of times more complex.
Many people note subtle differences in the visual aesthetic of different psychedelic substances. For example, psilocybin is said to have its own unique visual style as opposed to LSD. Is it possible that differing substances affect the various layers of the human visual processing system in unique and specific ways?
ND: In theory it’s possible, although given how little research has been done into psychedelic drugs in humans, almost nothing is known about those very nuanced differences between the effects of the drugs in humans. Different drugs have different binding affinities to many different receptors, and that could have some effect on how signaling in the brain is altered. Given how complex the brain is, it wouldn’t be surprising at all to learn that something as small as a change in the binding affinity of one compound to one receptor might cause large-scale changes in how information is processed in the brain. I really have no idea how that might occur, however, so this is all speculation.
I’ve also always wondered how much the placebo effect influences our personal experiences of psychedelics. There are many people who will swear up and down that a drug like LSD is qualitatively different than an almost identical compound like 1P-LSD or AL-LAD, but the problem is, they always know that they’re taking one or the other; and since LSD has such a storied and magical history behind it, I’m sure expectation plays a lot into the experience. We know how much set and setting can influence the nature of a drug experience, so it seems very likely to me that we’re psyching ourselves into believing in those aesthetic differences.
I am not convinced that under double-blind, placebo-controlled conditions, it’s actually possible to distinguish something like pharmaceutical grade LSD from AL-LAD, or 4-AcO-DMT from 4-HO-DMT. I’d really like to see that same, highly controlled study with something like LSD and mescaline. Early studies done in the 1960s suggested that, under controlled conditions, it’s not as easy for most people to tell the difference as we think. Maybe it is for highly experienced users, but certainly not for the average Joe.
Earlier you alluded to the astonishing fact that Google has essentially designed a computer program that, when asked to do object recognition, will naturally self-organize into a pattern similar to the human brain. This point might be of particular interest to those who are invested in the debate on the nature and locality of consciousness. What, if anything, does DeepDream tell us about the nature of consciousness? Does it provide any evidence supporting either local or non-local theories? What does it tell us about self-organizing systems?
ND: Yeah, and this is one of my favorite things to think about. Before I touch on the whole ‘consciousness’ thing, there’s something I want to discuss: and that is the universality of certain patterns in the universe. Hopefully, you’ll understand why this is relevant later.
If you look around the universe, you’ll see that there are certain patterns that appear at a variety of scales, some of which can be dizzyingly vast. Mathematicians call these patterns fractals and they appear practically everywhere in nature, usually when simple rules are iterated over and over again. One of my favorite examples scales us up to the largest structures in the known universe: at the largest scale, the universe is “foamy”, with large filaments of galaxies and superclusters forming the borders between enormous voids of empty space. There are regions of high complexity containing galaxies, stars, planets, and, in one special case, life; bordering vast regions of nothing.
This pattern, of highly-complex regions bordering less complex regions, appears all over science. You see this in ecology, where highly complex regions will develop, leaving less complex regions to fill most of the space. The great example of this is rainforests, which cover less than 2% of our planet but contain more than 50% of all species, while deserts and the open ocean cover most of our planet’s surface. In human settlements, the same pattern emerges: cities and highly dense regions with many complex social networks separated by large, rural, or uninhabited areas. I could keep going on, but hopefully you get the idea; for some reason, patterns reoccur at a variety of scales, even when completely different forces are at play generating them.
Localization of function is another one. If you’ve ever heard a cell compared to a city before, you’ll know what I mean. Most organisms can be described either as a single thing or as a network of many smaller things, each of which does one job yet interacts with all the others. You can scale up the other way too, thinking of societies and cultures as ‘organisms’, and humans, infrastructure, and resources as cells, organs, and nutrients. You could, in theory, go all the way up to the planet Earth (which is called the Gaia theory, and I love it).
The point I’m trying to make is that, for some reason, there seem to be certain patterns that wildly different systems will all tend to adopt for reasons that are currently unknown to science.
Now let’s talk about neural networks.
We gave the neural network a task: learn to ‘see’ and recognize objects. Similarly, evolution gave us the task of learning to see and recognize objects (although evolution has no will, so this is kind of a problematic metaphor but I’ll run with it). There are similarities and differences between both systems: the design of the [artificial] neural network mimics the distributed input → integration → output model of the nervous system, but that’s where the similarities end. The human brain is a carbon-based thing, using chemical signals as its messengers. The neural network doesn’t exist in physical space at all, but rather, is an abstract concept being represented on binary silicon transistors.
There’s no reason to expect that the two systems would develop the same mechanism for processing visual information, even with the existing similarities. You’d probably expect it to be different; after all, natural selection is a very different process from using gradient descent algorithms to alter the weights of connections between nodes… Nonetheless, a pattern emerges that is very similar to what we do, even though it arises through wildly different mediums. We didn’t even know how the neural net was working until we went inside, which is how these pictures were made.
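To make the “gradient descent altering the weights of connections” idea concrete, here is a minimal illustrative sketch, not Google’s code: a single artificial “neuron” with one connection weight learns to map inputs to outputs by repeatedly nudging that weight against the error gradient. The training data and learning rate are invented for illustration.

```python
# Minimal sketch of gradient descent: one "neuron" with a single
# connection weight w, trained to approximate y = 2x from examples.
def train(samples, lr=0.1, steps=100):
    w = 0.0  # the connection weight the algorithm will adjust
    for _ in range(steps):
        for x, y in samples:
            pred = w * x               # forward pass: input -> output
            grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
            w -= lr * grad             # descend: move w against the gradient
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(samples)
print(round(w, 2))  # converges toward 2.0
```

The same loop, scaled up to millions of weights across many layers, is essentially how the networks behind these images are trained; natural selection, by contrast, never computes a gradient at all.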
This is philosophically really interesting – is it possible that, somehow, coded into the fundamental rules of the universe, there is an ‘ideal’ way to do object recognition? We see these patterns appear at many scales, but they are usually mindless – they govern the distribution of matter and energy according to simple rules. The idea that similar patterns might exist for the high-level, cognitive processes is, to me, mind-blowing.
I am a strict believer in local consciousness, but I could be convinced that the potential for consciousness is non-local: programmed into the very heart of our universe. There needs to be WAY, WAY, WAY more research before we take this as anything other than wild speculation; but it makes me wonder, certainly. Would life on another world work the same way? Given the ubiquity of localization of function, it’s possible [on another planet] you’d see a complex life-form made of cells. Once you have cells that communicate, you’re pretty much at a neural network; and based on what we’ve seen here, I wouldn’t be surprised if its object recognition hardware was layered like ours.
It is worth noting that we designed the neural networks to mimic the human nervous system, so maybe it wasn’t too huge a leap for them to form a pattern like ours; but even so, it’s fascinating to think that if you were to build a machine that mirrored what evolution has built with 90% accuracy, purely through computational algorithms and with no input from natural selection at all, it [the machine] would re-derive the missing 10%.
Again asking you to speculate, do you think these images offer us anything when looking at phenomena such as near-death experiences, meditative trances, and other non-ordinary states of consciousness?
ND: Nah, not really. One of the problems with speculation like this is that it becomes very easy to forget what exactly it is that you’re looking at. These neural networks do one thing – image recognition – and they can tell us a LOT about how our brains might go about image recognition, and even how psychedelic drugs might create the visuals we know and love; but I think it would be inappropriate to say that just because these images touch on one aspect of the psychedelic experience, we can generalize them to all exotic states of consciousness.
Some people say that it is impossible to illustrate what a psychedelic experience looks or feels like. These images are by far the closest visual representations I have ever seen, and we will only get better at producing more complex images. Do you think we will ever be able to generate a truly psychedelic image, and if so what do you think it will take to get there?
ND: I think it would be impossible to perfectly re-create the psychedelic experience in any one medium (image, film, even immersive VR) because so much of the experience is a cognitive one. I rarely get any sort of visuals when I eat mushrooms, for example. The world looks pretty much the same to me, but I’m having a very profound ‘internal’ experience characterized by changes to my sense of self, my thought patterns, what makes sense, etc… I think too much focus on the visuals misses a large part of the point. Similarly, I think part of the ‘magic’ of visuals comes from the fact that we’re seeing them in a profoundly altered state. It’s not enough that I’m seeing tracers, for example, but I’m seeing tracers when I’m in a headspace that allows me to think about them in wildly new ways. Even if we could somehow create tracers visually, they’d have a lot less oomph, and probably feel ‘wrong’ if we were viewing them while sober.
I think the same is true for pretty much any other psychedelic visual.
Would you consider DeepDream to be an example of AI? Why or why not?
ND: I’m not wild about labels, and ‘intelligence’ is such a hard-to-define term that you could probably spend years talking back and forth trying to answer that question. I would call it ‘artificial cognition’ though.
What role might artificial neural networks play in the study and treatment of cognitive disabilities and human development?
ND: I think the best way to use them right now is to continue on this kind of research, although reverse it. If it is the case that a neural network will automatically self-organize into a structure similar to the human brain (which is a very strong claim, I know), then it might be possible to make a network that models other forms of cognition; and once it’s capable of doing what a normal human can do, we can ‘break’ it in different ways and see if we can’t reproduce mental disorders or cognitive disabilities.
I have a friend who works in computer science doing high-level neural network stuff, and we’ve talked about trying to build an OCD (obsessive-compulsive disorder) neural network by first building a normal network that does ‘error detection,’ and then altering it to see if we can make it OCD. This would be useful for my research, as I’m trying to examine the relationship between error detection and OCD in the brain.
We could also, once we have enough data, artificially re-create existing organic neural networks. For example, one node corresponds to one neuron in your head; and if we know how all the neurons are connected, we could, essentially, recreate your brain on a computer. We’re probably still more than a century away from this technology, but for the first time Strong AI enthusiasts have a concrete direction to go. They’ve already started trying to do that with simpler nervous systems, like that of the C. elegans worm, in a project called OpenWorm, which aims to completely recreate the entire animal in silico.
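The “one node per neuron” idea can be sketched very simply. The following toy simulation is purely illustrative, assuming nothing about any real connectome: a wiring diagram maps connections to weights, and each time step every node fires if its weighted inputs cross a threshold. The three-neuron chain and its weights are invented for this example.

```python
# Hedged sketch: simulating a tiny invented "connectome" where one node
# stands in for one neuron. Not based on any real nervous system.
def step(activations, connections, threshold=1.0):
    """Advance one time step: a neuron fires (1) if its weighted
    inputs reach the threshold, otherwise it stays silent (0)."""
    new = {}
    for neuron in activations:
        total = sum(w * activations[src]
                    for (src, dst), w in connections.items() if dst == neuron)
        new[neuron] = 1 if total >= threshold else 0
    return new

# A three-neuron chain: sensory -> interneuron -> motor
connections = {("sensory", "inter"): 1.5, ("inter", "motor"): 1.5}
state = {"sensory": 1, "inter": 0, "motor": 0}
state = step(state, connections)  # the signal reaches the interneuron
state = step(state, connections)  # ...and then the motor neuron fires
```

Projects like OpenWorm work at vastly greater scale and biological fidelity, but the core representational move is the same: nodes for neurons, weighted edges for connections, and update rules propagating activity through the wiring.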
Having ‘living’ examples of artificial neural networks to study opens up a whole world of new possibilities for us to explore. What are you most excited about as a researcher?
ND: What am I most excited for? Hmm, that’s hard to say. I would say probably whatever these networks can teach us about information processing. We know what the nervous system looks like but we have very little understanding of the mathematics and information theory that allows it to turn chemical and electrical signals into ‘consciousness.’ I think by having a simplified model, we’ll be able to get a greater handle on the patterns that underlie these complex systems. Once we know what to look for, then we can turn our focus to the human nervous system with much more concrete direction and a greater sense of purpose.
On behalf of the Nexian, I’d like to thank Nathanial.Dread for their continuing contributions to both the Nexus and for taking the time here to help guide us through the many different ‘layers of the mind’. DeepDream is just one of many recent innovations that opens doorways to profoundly new ways of understanding the universe and our place in it; and I, for one, can’t wait to see what’s on the other side.
DeepDream has become available for use by the general public. To see the images that other people have produced, check out the DeepDream twitter page. For the techies out there, if you’re interested in customizing these kinds of images yourself, this GitHub page guides you through the process. Alternatively, you can upload your own images here and they will be processed through a basic DeepDream ‘filter’. DeepDream can also be applied to moving images to create an interesting fluid effect, but the results are varied and experimental at this stage.
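For readers curious what the ‘filter’ is doing under the hood, here is a toy illustration of the core DeepDream idea, not the actual released code: instead of adjusting the network’s weights, you run gradient ascent on the *input image* so that whatever a feature detector already responds to gets amplified. The four-pixel “image” and the “edge detector” weights below are invented for illustration.

```python
# Toy sketch of the DeepDream principle: freeze the "detector" and
# nudge the input pixels to maximize its activation (gradient ascent).
def dream(image, detector, lr=0.1, steps=50):
    img = list(image)
    for _ in range(steps):
        # activation = dot(img, detector); its gradient with respect
        # to each pixel is simply the corresponding detector weight
        for i in range(len(img)):
            img[i] += lr * detector[i]  # ascend: boost the activation
    return img

detector = [1.0, -1.0, 1.0, -1.0]  # a crude invented "edge" detector
image = [0.5, 0.5, 0.5, 0.5]       # a flat, featureless input
dreamed = dream(image, detector)
# the flat image drifts toward the pattern the detector "wants to see"
```

In the real system the detector is a whole layer of a deep network, and the gradients are computed by backpropagation through many layers; but the effect is the same in spirit: the image is pulled toward whatever the network already half-sees in it, which is why eyes and animals seem to emerge from clouds and textures.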
Take a look at some of these notable DeepDream images: