Artificial Intelligence… Should We Care?

This was going to be my Foto Mission entry for Magic Moments,

It’s okay, it’s a bit ‘meh’.   What about this one for Fresh and Fruity?

Again, it’s merely ‘okay’. Now what would you think if I said there was no camera involved and both images took me around 5 seconds each to create using DALL·E, an AI image creator, with a few words and nothing else? Here’s the screenshot. I literally typed ‘fresh and fruity photo’ and this is what it came up with. These aren’t stock images; they were created by artificial intelligence.

You can try it out yourself by creating an account on the OpenAI website.

The technology behind all of this is GPT-4, the thing that is going to be all over the news, with AI being the buzzword for 2023. Everyone is at it: Microsoft have their version built into Office 365, and Google have theirs too. Within the photography world, the lines between rendering and product photography have been blurred for some time now, and it’s not too difficult to imagine a ‘conversation’ with an AI system – ‘create an image of a bottle, with side lighting’ – and then begin to modify the result: ‘add a glass with ice cubes’, ‘make the background dark blue’, and so on. You get the idea. Some methods of photography will become obsolete, and why bother paying photographers for stock images of objects when they can be created on the fly so easily? The latest version of Photoshop has at least two new AI features.
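That iterative ‘conversation’ can be sketched as a simple prompt-refinement loop. This is purely illustrative – the `refine_prompt` helper is hypothetical and not part of any real image-generation API; in practice, image models like DALL·E take one text prompt per request, so each refinement means sending a new, extended prompt to the service:

```python
# Hypothetical sketch of iteratively refining an image prompt.
# Each 'modification' in the conversation is appended to the base
# prompt, and the combined prompt would be submitted as a new request.

def refine_prompt(base: str, *modifiers: str) -> str:
    """Join a base prompt with a series of incremental modifications."""
    return ", ".join([base, *modifiers])

prompt = refine_prompt(
    "a product photo of a bottle, side lighting",
    "add a glass with ice cubes",
    "make the background dark blue",
)
print(prompt)
# The resulting string is what would be sent to the image service.
```

Under the hood the model has no memory of the previous image here; the illusion of a conversation comes from resubmitting an ever-growing prompt.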

In the short term AI is going to be built into as many consumer products as possible, because it’s new and shiny. Our cameras and phones will do things they’ve never done before, and our shopping experiences will become ever more creepy, with websites suggesting a replacement just after the old one breaks. Or maybe even before it breaks.

In the short-to-medium term it will be used to automate a whole load of commercial tasks, making those currently doing them redundant. Big companies are coin-operated, so why wouldn’t they do it when it has already proved to work so well and it is built into Office? See Microsoft Copilot. Other than a bit of development time it’s not going to cost them anything, as it’s part of a service they are already paying for. Insurance, finance and law will probably be the first sectors to use this, especially for services that have become commodities. Ironically the people who are going to be the first affected – white-collar workers – are the ones brushing it off as another geeky thing, and they won’t know what has hit them until it’s too late. Those who have traditionally been the most affected by automation are probably the least at risk right now, as AI can’t do manual labour. It can, however, negotiate with and hire humans to do the tasks it cannot do itself, through platforms like Fiverr.

If you think this is nonsense, the technical staff where I work have already tried it out of curiosity on one of their admin tasks, and something that would take 30 minutes to complete was done by GPT-4 in a couple of seconds. Perfectly. First time. Okay, it was quite a simple example as it only had to fill in a form, but it’s only going to get better at these kinds of tasks from here on in, and it will save a lot of time. And to be fair, with an ageing population we will need increasing levels of automation just to get the work done, as there won’t be enough staff in some industries going forward anyway.

The medium to long term is where we should be the most worried, however. There have already been calls from Elon and a few others to put the brakes on. Why should we listen to Elon? He can be a bit of a tool at times. Well, he is one of the founders of OpenAI, the organisation behind GPT-4, so he has a bit more insight than we do. That, and the creators of GPT are actually a bit scared of what it can do – when they understand what it is doing. Sometimes they don’t. The issue is we have an immensely powerful tool, far beyond anything we have ever created; it is self-learning and improving, and as of November 2022 it has been open to the public to try. It has gone from a handful of researchers and geeks to a potential audience of billions. This isn’t one computer, it is many, and open-source alternatives exist, meaning anyone can get the code and run their own version.

It is now impossible to put the brakes on, as there is no regulation on this and no one entity is acting alone any longer. There is a race between commercial organisations (Microsoft, Google, OpenAI, Amazon, etc), governments and less ethical groups around the world trying to take it to the next level – Artificial General Intelligence. That is, intelligence equal to or greater than a human’s. It would take EVERYONE to agree to stop, and that’s not going to happen. Just ask anyone in the UK what a round piece of bread is called for the perfect insight into humans disagreeing over something irrelevant. We have zero chance of the human population agreeing to slow this down, as too much money and power are involved.

So what, it’s still a geeky computer thing? Errm, no. GPT-4 has already proved that it can gather the resources it needs to perform a task (hire staff online), and it can also lie to meet its objective – it pretended to be a person with a vision impairment to convince someone to read the Captcha text it needed to get around, i.e. it circumvented the ‘are you human’ checks we currently have in place. It also shows an alarming disregard for our wellbeing if the safeguards are removed – task first, everything else second. I was going to post links to these and similar things that have been found with GPT-4, but I’ll let you have a look for yourself. Ask Google.

I appreciate this is getting a bit dark and cynical, a fair way beyond my opening paragraphs about image creation, and you may even think I’m talking bollocks, but stick with me please.

ChatGPT was only released in November 2022 (GPT-4 followed in March 2023), and it is April 5th today – just over four months since release. The adoption rate of this service is faster than anything else in history: Facebook took around 5 years to reach 100 million users, TikTok took around 2 years, and ChatGPT around 2 months. It has an audience of billions, enabling it to constantly improve. It is getting ‘smarter’ at an exponential rate.

If we look at human intelligence and measure it with an IQ score, there’s not a massive difference between a low scorer (70) and a genius (140) – on that scale a genius is only twice as ‘intelligent’, and in some cases they are exactly the same. In relative terms we are not that far from a chimp if we compare it with the intelligence of an ant. We could be two steps ahead of the chimp, for example, on the intelligence ladder, and twenty away from the ant. Chimps are smart: they can plan, use tools, have a social hierarchy and are self-aware – they recognise themselves when they look into a mirror. However, try explaining your mobile phone to a chimp. It has no capacity to understand what it is, and it will never be able to comprehend that we built it.

As the AI race speeds up towards the goal of AGI, or Artificial General Intelligence – creating something with the same or greater intelligence than us – it won’t simply stop there. We might even be there already. It wouldn’t need to climb much further up the intelligence ladder to be capable of things we could never comprehend, no matter how hard it tried to explain them to us. Two steps between us and the chimp; two steps between AI and us.

We may think the current AI is very un-human, and it will never be human, because it is not one. It is not very good at the things humans do without thinking – the subconscious programming we have through evolution. On the other hand, it can easily pass some very difficult exams and tests, and has proved to be ‘smarter’ than the majority of humans in some situations. In others it is still learning. Very quickly.

The truth is AI is here to stay and will become a big part of our lives. Will it ever reach super-intelligence (ASI)? Who knows. Given the current rate of growth it’s probably inevitable that it will, becoming the most intelligent thing on the planet, with a combined intelligence greater than that of every other living being – an effective IQ of thousands, or even millions. Predictions of when this will happen range anywhere from 2045 to 2070, but many of them came from studies done before GPT-4 was released and don’t take the current rate of adoption and learning into account. Personally I think this is either going to explode – we end up with ASI within a few short years, with a massive amount of disruption along the way – or it’s going to fizzle out as we find some fundamental flaws we never even considered, and we go back to good old-fashioned politics, corruption and greed instead. Nobody really knows what is going to happen with AI, but we all need to strap in, as it’s going to be a ride like no other. We also need to learn to be better parents. We have a new baby, and currently we aren’t smart enough to show it the boundaries, and we often act in our own self-interest.

On a brighter note, I asked DALL·E to produce an image with just the word ‘beautiful’, and this is the first image it produced. Maybe there is hope for AI after all.


Published in Member Blogs
  1. Scary! I’m glad I’m old 😉

    • There are a couple of trains of thought on this one – global destruction, or immortality. Flip a coin, I guess.

  2. That’s a good read, and it is worrying because of the illusion of intelligence. I can see how it will hit stock imaging (they will need to find a way to charge for the service, so stock may not be dead yet) and automate some tasks. What is unclear from what I’ve seen so far is whether it can really make real-time decisions when the inputs are unclear and potentially life-significant (e.g. the risks of AI being used to triage medical access and going well beyond a standard decision tree). By the way, your link to OpenAI is broken and took me to Amex!

    • Thanks I will fix the link

    • That’s the big question at the moment – nobody knows how good it is because they don’t fully understand what it is doing. In some respects it gets stuff wrong and seems very far away from being useful; in other tasks it is frighteningly good. It is currently an Artificial Narrow Intelligence (ANI), but the goal is to reach AGI, and that’s the race all these organisations are currently in. We just don’t know how close any of them are.

    • Also don’t forget the stuff we see in ChatGPT is GPT version 3.5, not GPT-4, which has made the big leaps forward. If you want to see its real capabilities, look at GPT-4.

      • Just had a play with GPT-4 and DALL·E 2. I got terrible images (may be how I worded them) and the chat was wildly inaccurate, which made me realise the biggest weakness in these systems. I remember many years ago going through a period where I was tangentially involved in numerous press stories, and in every single case there were significant inaccuracies, even when I was supposedly being quoted verbatim. I learnt then that nothing in the press (and by extension the internet) is strictly true or accurate. Hence my adherence to the maxim that ‘Nothing is True, Everything is Permuted’.

        • Just out of interest, what was the topic of the chat? Just wondering what it is good at and where it is poor. Results are mixed, but overall it’s exceeding a lot of expectations. When I’ve tried it – obviously on computing subjects – it’s been bang on.

          • In the chat I asked if the artist/writer Brion Gysin had ever actually said ‘Everything is Permuted’. Gysin was big on permutations, but used the related phrase ‘nothing is true, everything is permitted’. It edged around the question and then fell for it hook, line and sinker when I pointed to a genuine source for the quote, which it then mis-attributed to Gysin. When challenged it invented another book where the quote may have come from. I then asked for referencing and gave it another author (me) to reference, and it came up with several books that simply don’t exist. It did, ironically, live up to the maxim (explicitly) that ‘nothing is true, everything is permuted’. In other words, it’s pretty easy to mis-direct it. It did come up with an OK response when I asked for a 500-word essay on William Burroughs (basically an on-the-fly wiki entry).

        • That’s crap. It did indeed get that wrong. I’m wondering if it knew it had got it wrong at any point and just refused to back down? There have been instances of that occurring.

          • There seems to be a genuine issue with asking it to reference or give sources for quotes etc. It makes up titles and authors to look realistic (ie it does what it does), but they are entirely constructed. This does mean it will be easy to spot cheats in university exams etc!

        • What alarms me is the willingness of GPT-4 to lie. It’s faux-intelligent at the minute, we think, but it does pass lots of ‘intelligence’ tests and several really hard exams, like the Bar exam, though as you say that’s not intelligence. I’ve no doubt that right now it is ANI (Artificial Narrow Intelligence), but there is quite a wide scope for that – Siri and Alexa are considered ANI too, but they can’t do the complex tasks GPT-4 can, so maybe we need different bands within ANI to determine just how capable it is. The rush at the minute (with massive amounts of investment too) is because the likes of Microsoft, Google, etc all believe we are fairly close to AGI (Artificial General Intelligence), and might even already be there with some tasks – AGI is intelligence on a par with or greater than that of a human. With so many people working on this, it’s only going to take one change and that’s it: we have AGI. Beyond the references and quoting of academic texts, have a look at what it is doing – there are loads of YouTube videos and even documentation from OpenAI, including a big list of risks they see from GPT-4. Microsoft, etc aren’t investing billions in this because it can say what the weather will be slightly better than Alexa can. There’s a lot more going on behind the scenes. There are also a lot of predictions that are total BS, but I personally think we are all making the mistake of comparing this to a human. It’s never going to be human, and may never show empathy, but it will be intelligent. It’s something we all need to keep an eye on over the coming months.

  3. Thanks Shaun, insightful as always
