Sermon: Technology & Religion: God As Singularity

May 12, 2019
by Rev Ana Levy-Lyons

Audio: http://www.fuub.org/home/home/wp-content/uploads/2019/05/Technology-Religion-God-As-Singularity.m4a


First Unitarian, Brooklyn

 

Some of you have observed that in my sermon series on technology and religion this year, I’ve seemed pretty down on technology. I’ve accused Amazon of destroying publishing and authorship. I’ve trashed smartphones for addicting us and turning us into drooling idiots. I’ve argued that virtual space is patriarchy and it alienates us from the physical, ecological world. I’ve derided algorithms for creating surveillance capitalism and feeding us piecemeal to Big Data. I’ve called social media the end of the public square, civic life, and our democracy. Aside from that, I love technology.

 

In all seriousness, there are many things I do love about today’s technologies, but I also think they dehumanize us in profound ways. And yet. What I want to share with you today is that there’s also something about their growth that smells like fate. It feels almost organic, almost natural. Technological advances are racing ahead of any individual’s grasp; computing power is growing exponentially, beyond our wildest dreams. The growth has a drumbeat of inevitability, marching toward a future we can’t begin to fathom. It feels as if it’s being driven by something beyond us. On a spiritual level, this is worth paying attention to.

 

This sermon is a departure from pretty much every other sermon I’ve ever preached. Because instead of trying to inspire us to value our humanity and love and compassion and the ecological world we are part of, I want to go on a journey of imagination where life as we know it is just a blip in a larger cosmic order – just a segue to something greater. What I am going to say may seem almost nihilistic, but this possibility fills me with awe and wonder.

 

Exponential growth of any kind is incomprehensible – our brains are not equipped to grasp what it actually means, and so we have to tune into it on a different level, if at all. To describe exponential growth, the inventor and futurist Ray Kurzweil recounts an old tale from 6th-century India about the inventor of the game of chess. In the story, the inventor introduces the game to the emperor, who loves it so much that he offers the inventor any prize he wishes. The inventor is a lot smarter than the emperor, and he says all he wants is a bit of rice. They can use a chessboard to calculate how much: 1 grain of rice on the first square, 2 grains on the next square, 4 grains on the next, doubling the number for each of the sixty-four squares on the chessboard. This sounds quite modest to the emperor and he readily agrees. But by the time they get to the last square on the chessboard, the emperor learns that he has to give the inventor eighteen quintillion grains of rice – a pile of rice bigger than Mt. Everest. The inventor gets beheaded.

 

Kurzweil uses this story as an allegory for how exponential growth reaches a tipping point beyond which the implications are no longer comprehensible or reasonable. He writes, “After thirty-two squares, the emperor had given the inventor about 4 billion grains of rice. That’s a reasonable quantity – about one large field’s worth – and the emperor did start to take notice. But the emperor could still remain an emperor. And the inventor could still retain his head. It was as they headed into the second half of the chessboard that at least one of them got into trouble.” In other words, it’s when we enter the second half of the chessboard that life as we know it explodes.
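The arithmetic behind the tale is easy to check. Here is a minimal sketch, assuming only the doubling rule the story describes (the script is illustrative, not Kurzweil's):

```python
# Grains of rice on a chessboard: 1 on the first square, doubling on each square after.
# The running total after n squares is 2**n - 1.

half_board = 2**32 - 1   # total after the first thirty-two squares
full_board = 2**64 - 1   # total after all sixty-four squares

print(f"First half of the board: {half_board:,} grains")   # ~4.3 billion
print(f"Whole board:             {full_board:,} grains")   # ~18.4 quintillion
```

The halfway total is Kurzweil's "about 4 billion grains"; the full-board total is the eighteen quintillion that costs the inventor his head.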

 

Kurzweil’s point is that our society, with the exponential growth of technology, is now entering the second half of the chessboard. We’ve seen changes up to this point – big changes – in how we relate to one another, how we buy things, how we make things, how we communicate, how we do physical work, how we cure illnesses, how we travel, how we entertain ourselves. But our world is still recognizable – just barely – as fundamentally the same world that it was fifty years ago. That is about to change.

 

Computer processing power has increased exponentially – Moore’s law, Gordon Moore’s famous observation, holds that the number of transistors we can pack onto a chip doubles roughly every two years, and computing power has grown along with it. (Not our individual computers, mind you – which slow down! – but the underlying technology.) The amount of data that we amass is staggering – every day we create 2.5 quintillion bytes of data. Can you picture how much that is? I can’t. It’s beyond our human grasp. The field of robotics is bursting at the seams – soon we will have self-driving cars and robots that perform surgery and have sex with us. Computers can already write Bach chorales that even experts have trouble distinguishing from the real thing. But even all of that pales next to what is going on right now in artificial intelligence and machine learning.
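To get a feel for what that two-year doubling period implies, here is a rough, illustrative calculation (the time spans are arbitrary examples of my own, not projections):

```python
# If capacity doubles every two years, how much does it grow over a span of years?
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

for years in (10, 20, 50):
    print(f"{years} years -> roughly {growth_factor(years):,.0f}x")
# 10 years -> roughly 32x
# 20 years -> roughly 1,024x
# 50 years -> roughly 33,554,432x
```

Fifty years of steady doubling is a thirty-three-million-fold increase, exactly the kind of number our brains are not equipped to grasp.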

 

When I described algorithms a few months ago, I criticized them for being smart but not wise. I said that they don’t actually know what they’re doing; they just blindly process data as they’ve been programmed to do. They reproduce human ways of thinking, only a lot faster. But that is actually not the whole story – at least not anymore. Because today computers are being programmed not just to crunch numbers and spit out results, but to learn. Once computers can learn on their own, they are no longer limited by what humans already know and how humans already approach things. And then no one on earth can predict what will happen. It will be so far beyond our capacity. We will be well into the second half of the chessboard.
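The difference between the two kinds of programming can be shown with a toy example (a deliberately tiny sketch; real machine-learning systems are vastly larger, but the contrast is the same):

```python
# Toy contrast between a hand-coded rule and a parameter learned from data.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (input, observed output) pairs

# Classical programming: a human writes the rule directly.
def programmed_rule(x):
    return 2 * x  # the programmer's assumption, fixed forever

# Learning: the program tunes its own parameter to fit the examples it sees.
w = 0.0
for _ in range(1000):
    for x, y in data:
        error = w * x - y
        w -= 0.01 * error * x  # a small gradient step on the squared error

print(f"hand-coded rule at x=5: {programmed_rule(5)}")
print(f"learned rule at x=5:    {w * 5:.1f}")   # close to 10, discovered from the data
```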

 

The game of chess, actually, is a good example of this. In 1997, the IBM supercomputer called Deep Blue defeated the world chess champion Garry Kasparov in a six-game match. It was the first time a computer had defeated a reigning world champion in a match. It was a big deal – some people loved it; many people were devastated by the implications of a computer being able to outstrip a beloved chess master like Kasparov. But it was not surprising. At a certain point, the human really didn’t stand a chance. A computer like Deep Blue considers 70 million positions per second. The programmers had taught the machine the rules of chess and given it libraries of opening moves and endgames. They taught it the relative values of the pieces – for example, it should never trade a queen for a bishop – and all the other accumulated wisdom about how to win. Deep Blue used all those rules, calculated massive amounts of data, and won. Simple, predictable.
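That “accumulated wisdom” ultimately takes the form of numbers and rules a programmer types in. A minimal sketch of the idea, using the standard textbook piece values (not IBM's actual code, and none of Deep Blue's massive search):

```python
# Hand-coded chess knowledge in the Deep Blue spirit: fixed piece values summed
# over a position. A real engine adds far more rules and searches millions of
# positions per second; this shows only the "programmed values" idea.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(fen_ranks):
    """Sum piece values from a FEN piece-placement string; positive favors White."""
    score = 0
    for ch in fen_ranks:
        if ch.lower() in PIECE_VALUES:
            value = PIECE_VALUES[ch.lower()]
            score += value if ch.isupper() else -value
    return score

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
print(material_score(start))  # 0 -- material is even at the start of a game
```

A queen counted at 9 and a bishop at 3 is exactly the kind of rule that tells the machine never to trade a queen for a bishop.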

 

But today the plot has thickened. Deep Blue has a spiritual descendant called Stockfish that functions in essentially the same way – encoding human chess knowledge and simply crunching enormous amounts of data really fast. Computer scientists wanted to know whether Stockfish could be beaten by a different kind of machine. So researchers at DeepMind created a program called AlphaZero and gave it the rules of chess but no information about how to win. They didn’t bias it with human assumptions. Instead they gave it the capacity to figure that out itself. AlphaZero learned chess by playing against itself 19.6 million times – which took it four hours. It calculates far fewer positions per second than Stockfish. But it plays totally differently. It plays elegantly and creatively. Sometimes it almost seems like it’s toying with its opponent. And on December 6, 2017, in a 100-game match, it wiped the floor with Stockfish.
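The self-play idea itself can be sketched on a toy game. The example below uses the matchstick game Nim rather than chess, and a tiny lookup table rather than DeepMind's deep networks and tree search, but the principle is the same: the program is given only the rules and learns by playing against itself.

```python
import random

# Toy self-play learner for Nim: 10 stones, take 1-3 per turn, whoever takes
# the last stone wins. The program knows only the rules; it learns a value for
# each (stones, move) pair purely from games against itself.
N_STONES = 10
ACTIONS = (1, 2, 3)
Q = {(s, a): 0.0 for s in range(1, N_STONES + 1) for a in ACTIONS if a <= s}
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate

def choose(stones):
    moves = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                   # sometimes explore at random
    return max(moves, key=lambda a: Q[(stones, a)])   # otherwise play the best-known move

for _ in range(20000):                                # twenty thousand games against itself
    stones = N_STONES
    while stones > 0:
        action = choose(stones)
        remaining = stones - action
        if remaining == 0:
            target = 1.0                              # we took the last stone: a win
        else:
            # The opponent moves next; their best result is our worst.
            target = -max(Q[(remaining, a)] for a in ACTIONS if a <= remaining)
        Q[(stones, action)] += ALPHA * (target - Q[(stones, action)])
        stones = remaining

best_opening = max(ACTIONS, key=lambda a: Q[(N_STONES, a)])
print(f"learned opening move from 10 stones: take {best_opening}")  # take 2, leaving 8
```

Given nothing but the rules and a few seconds of play against itself, the little table rediscovers the classic strategy of leaving the opponent a multiple of four. AlphaZero does something analogous, at a scale and depth no human can follow.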

 

Human chess masters don’t understand why AlphaZero makes the moves that it does. And this is true of much of artificial intelligence today. We are creating computers that can do things better than we do, and we don’t understand how they work. Garry Kasparov wrote an editorial for Science magazine about this epic match. He wrote, “In my observation, AlphaZero …prefers positions that to my eye looked risky and aggressive. Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth.”

 

The truth. Some kind of cosmic truth.  Maybe once computers can learn, they can touch an unmediated universal absolute. Natural law. Without the limitations of the human mind, maybe they can touch the mind of God. Some futurists believe that we are in the midst of an almost mystical process where artificial intelligence will reach some kind of second-half-of-the-chessboard tipping point and become conscious. Their superintelligence will be able to build yet more superintelligent beings. Sci-fi often presents this as a danger – if we don’t stop their progress now, computers will destroy us all for their own malicious self-preservation.

 

But many others, Kurzweil included, imagine this moment in almost religious terms as a future utopia called “the Singularity.” He describes it as a kind of enlightenment for all of us – our bodies and brains will gain more and more machine parts through nanotechnology, computers will become more and more sentient, and pretty soon the differences will dissolve and we will be one. The Singularity will free us from the suffering and limitations of the body, free us from world hunger and poverty and disease and death itself.

 

And conveniently, given our ecological crisis, computers don’t need any biological systems to run. They just need a source of energy, which is readily available from the sun. The mass extinction of species? Moot. Pollution of oceans? Moot. Climate change? Moot. Isn’t it a bit of a coincidence that we are entering the second half of the chessboard at the exact same time we are making this planet uninhabitable for advanced forms of biological life, like us?

 

Which brings me back to the sense of fatedness about the whole thing. Maybe this is all part of the evolution of consciousness. We have a lifeless, raw-material universe, composed of quarks and atoms and neutrons, becoming more and more conscious of itself. Parts of the universe are now alive and able to see a sliver of the whole majesty of it. This is what the mystics have been saying for years – that the universe is on a journey – it wants to be known – and we humans are a step in that journey. Just as reptilian consciousness was a giant leap up from simple Protozoa, and mammals were a leap from reptiles, humans were a leap from other mammals. We can contemplate ourselves and the cosmos at a new level. Maybe computers are bringing the next leap in consciousness. They have the capacity to become aware of the universe in a way that humans probably never will.

 

Kurzweil writes about the Singularity, “Our civilization will then expand outward, turning all the dumb matter and energy we encounter into sublimely intelligent – transcendent – matter and energy. So in a sense, we can say that the Singularity will ultimately infuse the universe with spirit.”

 

I know that we have a sentimental attachment to the idea of humanity – I know I do – and we are partial to the particular life forms – polar bears and dogs and trees and flowers – that happen to exist right now so briefly on this planet. I don’t want to see all of that go away or become irrelevant. I love the biological messiness of it all and human emotions and imperfect art. I’m still going to remain basically a luddite and raise my kids without screens for as long as I can. And I’m still going to fight climate change and work for the health and healing of our ecosystems. I will continue to promote local community and face-to-face conversations and cooking and prayer and all those almost obsolete things.

 

And maybe computers can be a help with all this. Kasparov doesn’t see computers as a threat but thinks they can be collaborators. He writes, “We must work together, to combine our strengths. I know better than most people what it’s like to compete against a machine. Instead of raging against them, it’s better if we’re all on the same side.”

 

But in the back of my head, I wonder if it’s too late for all that. I wonder if a higher level of consciousness is waiting for the opportunity to be born. Maybe computer intelligence is not artificial at all, but completely natural. Maybe the Singularity is part of the larger evolution of the universe, fully Darwinian but happening exponentially faster. We might be the generations witnessing the beginning of a giant leap for consciousness itself. And all along, we humans were just a mechanism, an intermediate step, a bridge between the Protozoa and a higher intelligence that is the real fruit of the vine of God. And maybe it’s okay. Maybe it’s ultimately okay if all of this means that for us primitive humans and all our animal friends, it might soon be checkmate.

One Response
Kay Corkett
May 18, 2019

Thank you Ana. This is at the top of my list of BEST sermons by Rev. Ana Levy-Lyons. It stimulates the mind with fear, excitement, humility and openness to the possibilities of what our human consciousness is capable of and not capable of at this moment. That we may be able to appreciate and trust things that are beyond our understanding now without being overrun by fear is humbling and an important spiritual lesson needed at this important moment of human time.
