Back in 2005 I first heard the term ‘smartphone’. Instantly I imagined having my own personal ‘KITT’ (the talking car from Knight Rider) inside my phone.
Unfortunately the ‘smartphone’ was not all it was cracked up to be…
Smartphones allow users to browse the internet, play videos and download content. But in real terms, that’s not smart – if anything it should be called a ‘media phone’: it does one thing (or a cluster of related things) pretty well.
So then, the question occurred to me: What would a device or computer have to do to be ‘smart’, and will we see it in our lifetimes?
Alan Turing, most famous for his role in cracking the Enigma code during WWII, was fascinated by the idea of machine intelligence. In 1950, he proposed (in a paper called ‘Computing Machinery and Intelligence’) that if you could have a prolonged and in-depth conversation on a range of topics with a machine, without realising at any point that it was a machine, then you would have to say that it was intelligent. This is known as the Turing test.
Of course, a ‘smart’ piece of machinery does not need to pass the full Turing test, but it may have to demonstrate abilities including: comprehension; reaction; real-time problem solving; and the ability to reach a conclusion which the user could not reach by themselves (at least in the same space of time).
Arguably we have already created ‘intelligent’ machines – ones that can achieve feats of mental agility far beyond the human mind. Deep Blue and Watson are two such examples from IBM, but many more exist, from everyday GPS navigation software to self-driving cars, which combine remarkable versatility with raw machine intelligence (i.e. processing speed).
However, while they are exceptionally good at the specific task they have been programmed to do, they could hardly be considered ‘smart’. To qualify, they need flexibility. Being smart requires understanding and ingenuity (otherwise classed as imagination, unpredictability or originality).
Perhaps this distinction is more easily made with something we all know.
A vespa, or wasp, is quite a feat of engineering – it can move in ways we simply can’t, it can reach places we can’t go, and its sensitive antennae receive data that is invisible to us. It can also react to circumstances, protect itself, seek out food and even communicate with other members of its species. In many ways it is intelligent (relative to other insects), but you would struggle to define a wasp as ‘smart’ in any sense other than design. Of course it might do something unexpected – such as deciding to fly at a person’s face rather than fleeing when threatened – but this is not really a conscious choice: it is just a reaction, a built-in survival technique developed through evolutionary ancestry.
How can we make this distinction? Well, primarily because we know a wasp lacks imagination – the ability to consider and reason; instead it is driven by instinct and experience. With larger animals, such as dogs and cats, the word ‘smart’ begins to apply more generally. You can have a smart dog and a dumb dog, for instance.
The question, then, is: what is the difference between being smart and being dumb?
Well, typically speaking, a ‘dumb’ person is often content to settle for what little they know about the world, with no innate ambition to improve or reimagine their approach. ‘Smart’, on the other hand, describes someone (or something) that regularly thinks of new (or well-considered) solutions to problems, or is proficient at quickly spotting better ways of doing something – often ways which yield unexpected rewards.
At last I think I have arrived at a definition. A device (a phone, computer or other machine) can be considered smart if it is proficient at finding new solutions to existing problems – preferably ones that would be impossible (or at least very difficult) to discover by normal means.
For a device to be smart, it must have a strong base of abilities. It must comprehend complex inputs like speech, sound, patterns or numbers; it should react to context, offering a solution or set of solutions which meets the demands of the user; it must be able to take on new knowledge and develop understanding, altering existing solutions as required; and it must be able to offer a fully considered conclusion that does not have a negative impact on the user.
This last bit is important – any misjudgement, miscalculation or error in processing would greatly undermine the definition. In other words, if the solution doesn’t work, the machine quickly goes from being smart and useful to dumb and pointless. So, I would also add that any such computer needs to consistently offer better solutions than those currently available, and that these solutions should be optimal at least 90% of the time.
For those who would question this logic I say this: “Of the smart people you know, how often are they completely right the first time?” My guess is that, if you are being truthful, it’s less than 90% of the time.
So being smart does not always mean being completely correct – it means using reason and logic, alongside context and understanding, to increase the chance that any effort will have a greater payoff. The phrase that covers this in business is: ‘work smarter, not harder’.
This phrase is often attributed to those with more experience, who have most likely realised the pitfalls of doing things in a particular way. One final point to make, then, may be that smart people are often not inherently smart – being smart requires learning important lessons from failure. In that case, perhaps a smart device would also be able to learn from its own (and other people’s or devices’) past errors, or predict success, much as we do in our own minds when we try to imagine whether something might work.
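The idea of a device learning from recorded failures can be sketched very simply. This is a minimal, hypothetical illustration (the names `OutcomeMemory`, `record` and `best_approach` are my own, not from any real system): the device logs each attempted solution and its outcome, then prefers the approach with the best observed success rate.

```python
# Hypothetical sketch: a device that remembers which approaches
# succeeded or failed, and uses that history to pick its next attempt.
from collections import defaultdict

class OutcomeMemory:
    def __init__(self):
        # approach name -> [successes, attempts]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, approach, succeeded):
        stats = self.history[approach]
        stats[1] += 1
        if succeeded:
            stats[0] += 1

    def success_rate(self, approach):
        successes, attempts = self.history[approach]
        # an untried approach gets a neutral 0.5 prior guess
        return successes / attempts if attempts else 0.5

    def best_approach(self, candidates):
        return max(candidates, key=self.success_rate)

memory = OutcomeMemory()
memory.record("route_a", True)
memory.record("route_a", True)
memory.record("route_b", False)
print(memory.best_approach(["route_a", "route_b"]))  # route_a
```

Even this toy version captures the essential loop the essay describes: try, observe, remember, and adjust future choices accordingly.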
There are two key challenges to developing smart machines.
You will notice in my definition that a key component of being smart is the ability to be inventive, and here we see a problem straight away. Computers, in the general sense, are not designed to be inventive – they are designed to do exactly what you tell them to do, no more and no less. In order to actually create a system that can think in new ways, then, we must go back to the drawing board and build it with more capacity for creative thinking, and allowances for being wrong (sometimes).
Google is great. In fact Google has changed my life. It is, however, a great big metaphor for what a machine can do, and also its limitations. If you ask Google a question and it has an answer, it will provide good results. If you do the same and it doesn’t have an answer, it will provide bad results (or no results at all). It cannot draw on its experience and think around a problem because it doesn’t understand the complexities. In most cases Google has no idea why you are asking, nor how it (without the aid of humans) could ascertain the difference between a good answer and a bad answer.
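This limitation is easy to demonstrate with a toy version of keyword search (purely illustrative – real search engines are vastly more sophisticated): the matcher scores documents by word overlap with the query, so when no document actually answers the question it still returns the “best” match, with no sense that the match is bad.

```python
# Illustrative toy search: scores documents by how many query words
# they share, with no understanding of why the user is asking.
def keyword_search(query, documents):
    query_words = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(query_words & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(reverse=True)
    return scored[0]  # best (score, document) -- even if the score is 0

docs = ["wasps defend their nest by stinging",
        "fortran was designed in the 1950s"]
print(keyword_search("when was fortran designed", docs))  # good match, score 3
print(keyword_search("why do bees make honey", docs))     # score 0: still 'answers'
```

The matcher cannot tell its two outputs apart in kind – both are just the highest-scoring result – which is exactly the good-answer/bad-answer blindness described above.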
A system that is able to interpret needs and understand why a user is asking a question is a large jump from the query/command-based format we are used to.
These challenges are well understood within computer science. In this decade, many software companies are focused on moving beyond the methods of the past and developing new techniques modelled on human brain function, otherwise known as ‘cognitive computing’.
Of the many examples out there, I believe IBM are the closest to replicating the specific features I have been talking about.
In 2013, they released the first cognitive computer chip as part of the SyNAPSE project. This project has the long-term goal of building a computer system which is able to handle inputs just like a brain. It has over 4,000 processor cores – complex structures which each contain 256 inputs and 256 outputs – enabling it to mimic 1,000,000 neurons and 256,000,000 synapses, all of which are able to independently compare and contrast new information with existing data.
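The kind of unit such a chip mimics can be sketched in a few lines. This is a drastically simplified, hypothetical model (not IBM’s actual design): an integrate-and-fire neuron that sums weighted inputs arriving on its synapses and emits a spike when its potential crosses a threshold.

```python
# Simplified sketch of a spiking neuron: weighted inputs accumulate
# as 'potential', which leaks over time and fires past a threshold.
class Neuron:
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # one weight per input synapse
        self.threshold = threshold
        self.leak = leak            # potential decays between steps
        self.potential = 0.0

    def step(self, spikes):
        # spikes: list of 0/1 inputs, one per synapse
        self.potential = self.potential * self.leak
        self.potential += sum(w * s for w, s in zip(self.weights, spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0

n = Neuron(weights=[0.4, 0.4, 0.4])
print(n.step([1, 1, 0]))  # 0: potential 0.8, below threshold
print(n.step([1, 1, 0]))  # 1: 0.8 * 0.9 + 0.8 = 1.52, fires
```

A chip-scale system wires a million such units together, each comparing new input against the state its past inputs have left behind.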
Being such a radical design, they have even developed a revolutionary new programming language to handle the tricky idea of probability. Yes, that’s right, probability – meaning this is a computer that makes good guesses based on the information available; it is NOT right all of the time. Recent figures show that, in an environment where it has had little or no practical experience, it is around 80% accurate (in this case at recognising various objects in moving video). This is the first step away from conventional sequential programming – the FORTRAN-style model dating from the 1950s – and towards a world with more capable machines worthy of being called ‘smart’.
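What “making good guesses” means in practice can be shown with a tiny probabilistic classifier. This is an assumed example in the naive Bayes style, not IBM’s actual software: it counts how often each feature co-occurs with each label, then guesses the most probable label for new input – sometimes wrongly, but usually well.

```python
# Minimal naive-Bayes-style guesser: it estimates which label is most
# probable given the observed features, rather than looking up an answer.
from collections import Counter, defaultdict

class NaiveGuesser:
    def __init__(self):
        self.label_counts = Counter()
        self.feature_counts = defaultdict(Counter)  # label -> feature tally

    def train(self, features, label):
        self.label_counts[label] += 1
        for f in features:
            self.feature_counts[label][f] += 1

    def guess(self, features):
        def score(label):
            total = self.label_counts[label]
            s = float(total)
            for f in features:
                # add-one smoothing so unseen features don't zero the score
                s *= (self.feature_counts[label][f] + 1) / (total + 2)
            return s
        return max(self.label_counts, key=score)

g = NaiveGuesser()
g.train(["wings", "stripes", "buzzes"], "wasp")
g.train(["wings", "feathers", "sings"], "bird")
print(g.guess(["stripes", "buzzes"]))  # wasp
```

The point is the shift in contract: instead of a guaranteed-correct lookup, the program commits to its best-supported hypothesis – exactly the trade the essay says a ‘smart’ machine must make.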
My own guess is that intelligent machines will be accessible (i.e. affordable) around 2050. More than likely, however, we will be able to create some illusion of cleverness before then. I say this because there is a strong trend of software and hardware integration right now, meaning more and more computer programs and devices are able to communicate with one another.
This growing sector is colloquially known as IoT or the Internet of Things – a complex network of sensors, hardware and software which work in unison to maximise efficiency. This level of cohesiveness and flexibility would mimic ‘smartness’ quite well, even though, by its very nature, it has no imagination or ability to think creatively.
If this can be considered smart, much like a ‘hive mind’ seems to be intelligent, then I suppose the answer to the question is yes – we will see smart devices in our lifetimes.