Authored By: Jonathan Spence
“I’ll be back” – the infamous words of arguably the most famous & bad-ass cyborg around. Arnold Schwarzenegger does seem to be making a move back to acting after a stint “governating” California, so those words may well hold true! For those who don’t know, a cyborg is a being that combines both biological and mechanical components – in the Terminator’s case, a robot on the inside with living skin on the outside. Now I know you may be asking how this is relevant to an Xtracta-oriented post, but stick with me!
“Terminator” depicted a future where robots take over humanity. A central networked system called Skynet, designed by the USA as the ultimate tool of war, becomes “self-aware” and turns the robots designed by man against him. As time progresses and the war moves on, Skynet starts to invent new robots and technologies of its own, to the point of inventing time travel. Talk about smart robots!
Back to the present day: with the work we have been doing with artificial intelligence, I no longer see a lot of the concepts science fiction has developed over the years as that far-flung. The whole foundation of computing (and one which has not changed since mechanical computing’s inception more than 100 years ago) is absolute logic. A computer operates by saying “if this then do X, or if that then do Y”. Of course it is a little more complex than that, but for almost everything we do to interact with computers today, such logic applies absolutely. Artificial intelligence tries to make this more “fuzzy” – it’s an area of computing which tries to mimic thought patterns and decision making when the inputs aren’t 100% set in stone.
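To make that contrast concrete, here is a tiny illustrative sketch (the function names and thresholds are my own invention, not any real system): the first function is pure “if this then X” logic, while the second weighs up several imperfect signals into a confidence score rather than demanding a hard yes/no.

```python
# Absolute logic: the outcome is fully determined by fixed rules.
def is_invoice_strict(text):
    return "invoice" in text.lower() and "total" in text.lower()

# A "fuzzier" approach: imperfect signals are weighed into a confidence score,
# so the decision survives inputs that aren't 100% set in stone.
def is_invoice_fuzzy(text):
    signals = {
        "mentions the word invoice": 0.5 if "invoice" in text.lower() else 0.0,
        "contains a currency symbol": 0.3 if any(c in text for c in "$€£") else 0.0,
        "mentions a total":           0.2 if "total" in text.lower() else 0.0,
    }
    return sum(signals.values())

print(is_invoice_strict("Tax Invoice - amount due $100"))  # False: no "total" keyword
print(is_invoice_fuzzy("Tax Invoice - amount due $100"))   # 0.8: probably an invoice anyway
```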
Artificial Intelligence Today
The term artificial intelligence – A.I. in acronym form – is gaining greater prevalence through a variety of computer-aided applications today. It’s used very loosely, and fair enough: A.I. is a very open concept, and many applications meet the criteria for it.
Generally speaking, the applications touted as A.I. today are those which deal with quite specific requirements, e.g.:
- Image recognition
  – Facial recognition for smartphone unlock
  – Facial recognition for age/sex determination
  – Industrial applications such as interacting with components in a manufacturing process
- Text/data discovery
  – Web search engines
  – Turnitin-style plagiarism checkers
- Robotics | air-borne “drones”
  – Military applications
  – Amazon quadcopter couriers
- Robotics | human-assistive robots
  – Aged care robots (especially coming from Japan)
  – Automated vacuum cleaners
  – Automated lawn mowers
In most of these examples, the applications are very limited in scope. The root cause of this limitation lies in “features”. “Features” are things that can vary from example to example but still follow the same pattern. Let’s take facial recognition and robotic lawn mowers: eyes may be a feature of a face for facial recognition, while the verge between grass and other surfaces could be a feature for the lawn mower. Each feature is relatively similar at a higher level but can vary – such as eye colour, or the layout of the grass/other-surface boundary.
So facial recognition tools have a large number of preset features to look for in a face – they already know that humans have two eyes, a nose, a mouth and so on. They just need to look at how these features differ between subjects to reach a conclusion as to whether there is a match or not. Likewise, automated lawn mowers just need to determine that the surface they are travelling on is grass and then map out the various grassed areas using GPS/inertial measurement to build “maps”.
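As a rough illustration of what “preset features” means (the feature names and numbers below are made up for the example, not taken from any real facial-recognition system), the system already knows which features to measure – only their values differ from face to face:

```python
# Illustrative only: the features are fixed in advance (eye distance, eye colour,
# nose length); only the *values* of those features vary between people.
reference_face = {"eye_distance_mm": 62, "eye_colour": "brown", "nose_length_mm": 48}
candidate_face = {"eye_distance_mm": 63, "eye_colour": "brown", "nose_length_mm": 47}

def matches(a, b, tolerance_mm=3):
    # Every preset feature must agree, within a tolerance for the numeric ones.
    for feature, value in a.items():
        other = b[feature]
        if isinstance(value, (int, float)):
            if abs(value - other) > tolerance_mm:
                return False
        elif value != other:
            return False
    return True

print(matches(reference_face, candidate_face))  # True: same person, within tolerance
```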
These applications are all very limited in scope because the “features” each one looks for are set in stone, and any kind of deviation cannot be overcome automatically. The trick for such systems is therefore to use many features, so that if one is unavailable they can fall back on the others. This is the approach we have taken so far with Xtracta: we provide the system with a massive number of features to use to capture data automatically, which may or may not apply depending on the document. It’s a good approach and much better than “templating”, because features can be a lot more high-level and can interact. E.g. for invoices (currently our most popular document type): do they follow similar patterns in terms of how the numbers relate to each other or how data is typically laid out, and do things like the country of the supplier or the particular organisation sending them to us affect this?
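To give a flavour of the idea (this is just a sketch of the general approach, not Xtracta’s actual algorithm), imagine several independent features voting on which number on an invoice is the total; if one feature doesn’t apply to a particular document, the others still carry the decision:

```python
import re

# A sketch of the "many features" idea: several independent clues vote on which
# number is the invoice total, and if one clue is missing, the rest still apply.
def guess_total(lines):
    candidates = {}
    for i, line in enumerate(lines):
        for amount in re.findall(r"\d+\.\d{2}", line):
            score = 0
            if "total" in line.lower():      # feature: "total" keyword on the same line
                score += 2
            if i >= len(lines) - 2:          # feature: totals tend to sit near the bottom
                score += 1
            all_amounts = re.findall(r"\d+\.\d{2}", " ".join(lines))
            if amount == max(all_amounts, key=float):
                score += 1                   # feature: usually the largest amount
            candidates[amount] = max(candidates.get(amount, 0), score)
    return max(candidates, key=candidates.get) if candidates else None

invoice = ["Widget  2 x 40.00", "GST 12.00", "Amount due: 92.00"]
print(guess_total(invoice))  # 92.00
```

Notice that on this particular document the “total keyword” feature never fires at all, yet the other features still pick the right value – that is the fall-back behaviour in action.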
The question is: is this really the A.I. we think of? It is, but a more basic level of A.I. – it can indeed handle variance, so it has some “intelligence”, but compare that to, say, a person who can learn to identify the features themselves, and you see a divergence in what “intelligence” could be defined as.
Machine Learning
Learning is a key part of intelligence as we know it. From a bird being able to locate its nest by learning its surroundings, to a lion cub learning to stalk prey by mimicking the more experienced members of its pride – learning is part of how intelligence is enhanced and is, in some form, embedded in most animals. Indeed it has to be: for birds, every nest will be in a different place, and for lions, stalking prey requires consideration of so many variables – from the type of grass to the light and beyond.
Machine learning forms part of the basic artificial intelligence examples above, from a facial recognition system which must learn (and thus compare) many samples to know what is and isn’t a match, to an automated lawn mower learning the boundaries of a lawn. This learning constantly evolves (what happens if certain little people leave their toys in the middle of the lawn one day, yet certain bigger people remove them the next!).
At Xtracta, learning is key – especially when we start generating many, many complex relationships between our huge feature set. Compared to purely logical approaches (such as templating), it means we can service basically the whole world without a team of people sitting and making templates for (in the invoice example) every supplier that exists around the world. Could you imagine it?? It just wouldn’t be logistically possible! The cloud approach has been the absolute backbone of this: because we merge literally every document we have ever seen (and we have already merged millions) into a single unified learning pool, we can leverage far more information and build more wide-ranging feature relationships than if we were installed as an on-site application in islands for every client.
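Here is a toy illustration of why pooling helps (the numbers are purely illustrative, not our production design): statistics learned from every client’s documents are merged into one pool, so a pattern seen at one client immediately benefits all the others.

```python
from collections import Counter

# Purely illustrative: counts of where the "total" appeared on documents from
# three separate clients. On-site "islands" would each learn only from their own data...
client_a = Counter({"bottom_right": 40, "bottom_left": 5})
client_b = Counter({"bottom_right": 12})
client_c = Counter({"top_right": 3, "bottom_right": 7})

# ...whereas a single cloud learning pool merges everything ever seen, so rarer
# patterns (like "top_right") are still represented for every client.
pooled = client_a + client_b + client_c
print(pooled.most_common())  # [('bottom_right', 59), ('bottom_left', 5), ('top_right', 3)]
```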
Learning to learn
This is really the holy grail of the field of A.I., a concept known as “strong” or “general” artificial intelligence. This is a system which not only has machine learning but also has the ability to learn the “features” themselves – the things it uses to make comparisons and draw conclusions. Imagine a scientist on the road to discovery. He/she thinks of a hypothesis which has never been considered; if, through tests, that hypothesis turns out to be true, then a new “feature” is learnt by that scientist and probably by humanity as a whole.
Or, at a more basic level, children try out different things to gain their parents’ attention and find those which elicit the greatest response. They may try crying, or they may try drawing on the walls. If they see that drawing on the walls consistently annoys Mum/Dad the most, they then know how to get attention more effectively in the future.
In both examples, this ability to try out new things or “features”, and then find which values of those features work best (e.g. crying loudly beats crying softly), is what points towards general intelligence.
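As a playful sketch of that idea (a toy “explore and exploit” loop of my own, not a claim about how children actually learn), an agent that starts with no idea which behaviour works simply tries things, watches the response, and settles on whatever elicits the biggest reaction:

```python
import random

# Toy sketch of "learning to learn": the agent is not given a fixed feature set;
# it tries behaviours, observes the response and keeps whichever works best.
actions = ["cry softly", "cry loudly", "draw on the walls"]
attention = {"cry softly": 1, "cry loudly": 3, "draw on the walls": 5}  # hidden from the agent
average_response = {a: 0.0 for a in actions}
tries = {a: 0 for a in actions}

random.seed(0)
for _ in range(200):
    untried = [a for a in actions if tries[a] == 0]
    if untried:
        action = untried[0]                        # try every behaviour at least once
    elif random.random() < 0.1:
        action = random.choice(actions)            # occasionally experiment anyway
    else:
        action = max(average_response, key=average_response.get)  # exploit what worked
    reward = attention[action] + random.random()   # noisy parental reaction
    tries[action] += 1
    average_response[action] += (reward - average_response[action]) / tries[action]

print(max(average_response, key=average_response.get))  # "draw on the walls" wins
```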
The Future
Whether we develop general intelligence remains to be seen. In my opinion it’s inevitable: at the end of the day, what nature does can ultimately be replicated by man, and general intelligence exists in nature (i.e. in us humans). Our brains operate using what are called neurons – electrical pulses fired across a massive number of neurons underlie all of our thinking and actions. There are estimated to be around 100 billion neurons in a human brain, interacting to enable our consciousness and decision-making abilities.
In terms of raw numbers, computers are at this level in their ability to handle logical yes/no decisions. The key difference is that neurons have differing strengths and structures and form what are known as synapses. Synapses are connections between individual neurons that strengthen as learning takes place, providing a complex web of interactions. Computers are not like that: their “neurons” are simple yes/no switches which must use software to learn relationships, whereas a brain changes physically. So short of a computer which can reconstruct itself as it learns, this must be emulated in software.
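A minimal example of that emulation (a single artificial neuron of my own construction, nothing like the scale of a real brain): the “synapse strengths” are just numbers that software nudges as learning takes place.

```python
# A single artificial "neuron": the synapse strengths are just numbers (weights)
# that software adjusts during learning -- no physical rewiring required.
# Here it learns the logical AND of two inputs, purely as an illustration.
weights = [0.0, 0.0]
bias = 0.0

def fire(inputs):
    # Weighted sum of incoming signals, then a hard yes/no threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(20):                      # repeated exposure "strengthens the synapses"
    for inputs, target in examples:
        error = target - fire(inputs)
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error

print([fire(i) for i, _ in examples])    # [0, 0, 0, 1] -- it has learnt AND
```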
And this can be done, but it requires many more logical operators – to emulate the slight differences which exist between neurons, millions of extra operators may be needed. Perhaps computers will even get to the stage where they can experiment on their own “brains”, with far more variation than exists between human neurons. Perhaps a new approach that doesn’t use synapse-style thinking will work. The key, no matter what, is a basic system which can become “self-aware” and enhance itself just as we can by learning.
The big risk is that computers will be able to out-power humans both in gathering knowledge and inputs (we can’t exactly plug our brains into the internet or a billion sensors, whereas they can) and in processing that information (we can’t add more RAM to our brains!). Humans may become obsolete or inferior.
So it remains to be seen if this happens. Our lives will get better and easier as A.I. improves (removing the need for manual data entry, anyone?). But many futurists think it will occur, and so do I – there will come a point at which humans are overtaken by our creations.