Artificial intelligence (AI) is not a new concept; in fact, it has been around for about 70 years.
It’s defined as the simulation of human intelligence in a machine or software programmed to think and act like a human.
As you might expect, philosophical and ethical arguments about the consequences of creating such angels or monsters with human-like intelligence have been explored in books and films, and philosophised about since ancient Greece.
When you hear the term AI bandied around today, it’s because of very specific developments in what is now called generative AI and the use of large language models. Bear with me.
These new models generate text, images and other media based on what they have learnt. They’re referred to as AI models, and they’re capable of learning patterns and structure from the data they’ve been trained on. Once trained (or programmed, if you prefer), they’re capable of generating new data with similar characteristics.
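To make the idea of “learning patterns and generating data with similar characteristics” concrete, here is a deliberately tiny sketch: a bigram model that records which word tends to follow which in its training text, then generates new text with the same local statistics. Real large language models are vastly more sophisticated; this toy is only an illustration of the principle, and every name in it is my own invention.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, the words that followed it in training text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10, seed=0):
    """Generate new text that mimics the training text's word-to-word patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never preceded anything in training
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on a single sentence, the model can only parrot fragments of it, but the same mechanism, scaled up enormously and applied to tokens rather than whole words, is the family that large language models belong to.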
This development has created a financial storm of investment, comparable in scale to the internet in 1995 or smartphones in 2010. About $40 billion of venture capital has flowed into this new great hope of science so far. It’s hard to find a company these days which isn’t talking about AI and how it’s fundamental to what they do.
But what impact is this new tech having on politics, business and the sciences, and how might it affect society and life? Is it going to change all our lives for the better or make them much easier to control?
As with most previous information technology developments, it’s by and large the same toolmakers of the knowledge economy who have been making most of the moves. US behemoths like Google (DeepMind), Microsoft and Meta (Facebook) have all moved fast to establish their own models or invest heavily in others.
OpenAI, a less familiar company, has nevertheless become the most famous pioneer in this space with the release of its ChatGPT model last year. Microsoft, its biggest investor at $13 billion, is also now a potential competitor, as it hosts Meta’s Llama model on its Azure platform.
Some former OpenAI employees took their expertise and formed a rival, Anthropic. Although Google was an early backer with $300m invested, that’s chump change compared with the recent $4 billion investment from Amazon.
Aside from all this monopoly money sloshing around the model makers, there has also been an explosion of start-ups focusing on some of the more obvious applications which sit on top of the models, like legal chatbots and virtual doctors.
Another area of interest being explored for future applications is the use of much smaller data sets. Instead of needing to run on thousands of computers, what if a model could run on your phone or fridge?
That’s not as daft as it might sound. AI could be used for power management in a fridge, for instance. Once it has data on when the door is most likely to be opened and for how long, it can predict when to lower the temperature to compensate.
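As a hedged sketch of how such a predictor might work (every class name and number here is hypothetical, not a real appliance API): count door openings by hour of day, then lower the set point an hour before the historically busiest hours so the fridge has extra thermal headroom when warm air is about to rush in.

```python
from collections import Counter

class FridgePreCooler:
    """Toy sketch: learn when the door is usually opened, pre-cool beforehand."""

    def __init__(self, base_temp=4.0, precool_delta=1.5, top_n=3):
        self.base_temp = base_temp          # normal set point in degrees C
        self.precool_delta = precool_delta  # how far to lower ahead of busy hours
        self.top_n = top_n                  # number of busiest hours to anticipate
        self.openings = Counter()           # hour of day -> count of door openings

    def record_opening(self, hour):
        self.openings[hour % 24] += 1

    def busiest_hours(self):
        return {hour for hour, _ in self.openings.most_common(self.top_n)}

    def set_point(self, hour):
        # Pre-cool one hour before a predicted busy hour; otherwise hold steady.
        if (hour + 1) % 24 in self.busiest_hours():
            return self.base_temp - self.precool_delta
        return self.base_temp
```

A production version would weigh energy cost against food safety margins, but the core loop is just this: log usage, find the pattern, act slightly ahead of it.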
This bright new world still has two constraints which could upset the party. One is the amount of computing power needed; the other is original data. Although there is a physical issue of chip availability right now, the longer-term challenge is having reliable, quality data to feed the machine.
It has been estimated that the power needed to train new models is doubling every 6-10 months. By 2026, training a model like ChatGPT is expected to cost around $1 billion.
It seems like a lot, but not when you consider that the new Tesla factory being built in Austin, Texas is estimated to cost $10 billion, and semiconductor manufacturers spend multiple billions building their fabrication plants.
What does appear to be the case is that if you want to be a data-modelling player, investment at significant scale is needed to have even a chance of success.
And there might be another fly in the ointment: copyright infringement.
Many of these new models are using data which is publicly available, but do they have the rights to use it for this purpose? It does suggest that companies with their own proprietary data sets will have a distinct advantage.
Adobe developed a text-to-image generator called Firefly, but the model was trained entirely on images from Adobe’s own archive of stock photography.
So will these large, energy-sapping AI models lead to gigantic innovative leaps forward in thinking and discovery?
The answer might be to think of AI more as a tool, a bit like a microscope. As such, it can do things that humans can’t. It was no coincidence that there was a significant shift in medical discovery and understanding after the microscope was invented.
AI models are statistical machines, not capable of original thought or innovation (yet). But that doesn’t mean they can’t lead to significant discoveries, just as the microscope has.
Already, a new antibiotic has been discovered by Regina Barzilay at MIT using AI. The data set created defined how antibiotics act against the particular bacteria being studied. As a result, the human side of the study could focus on one or two promising candidates, which led to the discovery.
In another medical example, AI has proven more accurate than oncologists at detecting early-stage cancers from the available data. Machines apply the same criteria consistently, and they never get tired.
The two most powerful forces which will directly impact the rest of my life, and the rest of this century, are the transition to net zero and digital transformation.
It is AI’s ability to consume huge amounts of data, sorting signal from noise, which makes it so important. Being able to recognise trends in emissions will help determine whether the reductive actions being taken are working. By automating the acquisition, analysis and reporting, human resources can be better spent on developing new and better strategies.
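The simplest form of “recognising the trend” is just fitting a line to the emissions series and reading off its slope. This toy least-squares calculation (my own sketch, not any particular monitoring system’s method) shows the idea; real pipelines would use proper statistical tooling and account for seasonality and noise.

```python
def emissions_trend(series):
    """Least-squares slope of an emissions time series.

    A negative slope suggests reduction measures are working;
    a positive slope suggests they are not. Pure-Python sketch.
    """
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    variance = sum((x - mean_x) ** 2 for x in xs)
    return covariance / variance
```

Feeding it four years of falling emissions, say `[10, 9, 8, 7]`, yields a slope of -1.0 per period: the automated signal that the curve is bending the right way.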
The Danish company, Vestas Wind Systems, uses AI to make its wind farms more efficient by adjusting individual turbines so that the downwind turbines don’t suffer from any air turbulence.
Global Forest Watch, an open-source web application, uses optical and radar imagery to detect tracts of forest that are being cleared for new agricultural plantations. The system detects colour, size, shape and pattern, even when there’s cloud cover. This is a real-time view, with local precision on a global scale, meaning alerts and data can be served to the appropriate government.
AI will, if nothing else, make the human race accountable for its own destruction.