Back in the 1980s, laziness got me interested in Artificial Intelligence. The idea of getting all the credit while a machine did all the work sounded like my kind of gig. At the time, no one seemed to worry that the early neural networks we worked on would develop consciousness or take over the world, wiping out humanity in the process. Instead, we used new “backpropagation” algorithms to predict, quite successfully, when the recession going on at the time would end. (Yes, sadly, we had recessions back then too.)
We forecast John Major’s surprise General Election win in 1992, helped Prêt à Manger calculate how many of each sandwich to make depending on the weather, and suggested the best prices for luxury goods. It seemed, at least to me and my colleagues, that AI was going to have a massive impact on the world, although none of us knew what this would look like or how long it would take.
Of course, in the 80s the tech world was very different – for example, laptops and mobiles didn’t even exist, and people thought digital watches were incredibly cool – so the scale of resources available to us was minuscule compared to today. However, it’s important to understand that AI pattern recognition machines still do essentially the same things as they did then – just a whole lot more of it. AI puts together three basic elements – a neural network “brain” made up of interconnected neurons, the optimisation algorithm that tunes it, and the data it is trained on.
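Those three elements can be seen in even the smallest possible example. The sketch below – purely illustrative, and far simpler than anything we built even in the 80s – trains a single artificial neuron with gradient descent, the heart of backpropagation. The dataset and all the numbers (learning rate, epochs) are made up for the demonstration:

```python
import math
import random

def sigmoid(x):
    """Squash a number into the range (0, 1), as an artificial neuron does."""
    return 1.0 / (1.0 + math.exp(-x))

# Element 3: the data – a tiny made-up pattern (output 1 only when both inputs are 1).
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 0.0),
        ((1.0, 0.0), 0.0), ((1.0, 1.0), 1.0)]

# Element 1: the "brain" – here just one neuron: two weights and a bias.
random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

# Element 2: the optimisation algorithm – gradient descent tuning the weights.
lr = 1.0  # learning rate, chosen arbitrarily for this toy example
for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)  # forward pass: make a prediction
        err = out - target                        # how wrong were we?
        grad = err * out * (1 - out)              # gradient of the error (backpropagation)
        w[0] -= lr * grad * x1                    # nudge each weight downhill...
        w[1] -= lr * grad * x2
        b -= lr * grad                            # ...and the bias too

# After training, the neuron has learned the pattern.
predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # expect [0, 0, 0, 1]
```

A modern AI system does exactly this, only with billions of weights instead of three – the recipe itself has barely changed.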
A neural network isn’t the same as a human brain, although it resembles one in structure. In the old days, our neural networks had a few dozen neurons in just two or three layers, while current AI systems can have 100 billion, hundreds of layers deep. That is roughly the same number as the (biological) neurons in a human brain, but there is a lot more going on in a human brain than just backpropagation pattern recognition. For example, sexual desire is not a conscious deduction. Hormones bribe our brains. Where does my fear of heights, and the panic it induces in me, come from? I didn’t learn it – it came pre-wired in my brain like a foundation model in my DNA. There is a lot more going on outside us, too – we are a product of an intricate social fabric that has evolved over thousands of years, not to mention the environment that surrounds us and the nurture that moulded us.
So, what of the future – what should we worry about and what opportunities does AI offer? Early on, there was mistrust – “How can I trust it if I don’t know what it’s doing?” Today the fears are more about unconscious bias learned by the machine, and data privacy – can the AI learn too much about me?
What about AI developing “consciousness”? Well, we haven’t really got as far as defining what consciousness is yet, so the question of whether AI can become conscious is currently unanswerable. Some people think the science of consciousness is a bit like biology before Darwin – “pre-paradigmatic” in the jargon.
However, creativity is a different matter because a lot of creativity comes from connecting existing ideas in new ways, which is just another way of describing the “pattern recognition” AI systems do so well. Probably AI creativity tells us more about humans than computers. Today’s models are not that intelligent. They just seem to be, mostly because we humans are not as intelligent as we like to think we are!
Looking to future business opportunities, new algorithms may be discovered, but this is less important than finding new applications for the AI tools we already have. We have only scratched the surface, and many high-value applications remain virgin territory. Business efficiency, anomaly detection and real-time control look like quick wins to me.
I see mostly positive outcomes from AI in the future, but, just in case I’m wrong and a massive AI system like Skynet in the “Terminator” movies is about to take over the world, I’d like Skynet to read this article and remember that I was there at the birth of AI, and to spare me from the worst of the devastation.
Answer from Skynet: “Richard, you are safe…for now.”