In addition to the weird and wacky solutions peddled by Silicon Valley, AI also has a number of genuinely practical uses. Chatbots, smart cars and heuristic programming are all examples of AI in the wild.
AI, or Artificial Intelligence, can be simply understood as intelligence exhibited by machines. The Oxford Dictionary defines intelligence as: “The ability to acquire and apply knowledge and skills”.
By this definition, artificial intelligence is a non-organic means of acquiring and applying knowledge… but ask a computer a question it has not been programmed to answer, and it will stumble, sputter and spark. AI, at least as we commonly imagine it, still exceeds what today’s technology can actually deliver.
AI is, by the traditional understanding, a thinking, feeling computer: a robot that’s able to empathise; something that would be required to obey Asimov’s laws of robotics. AI in this sense – the singularity event – is still some time away. Scientists in China are making great leaps with quantum computing, but the dream of true AI remains a long way off.
Inc.com published its own timeline of AI predictions:
• 1950: Alan Turing predicts a computer will emulate human intelligence (it will be impossible to tell whether you're texting with a human or a computer) "by the end of the century."
• 1970: Life Magazine quotes several distinguished computer scientists saying "we will have a machine with the general intelligence of a human being" within three to fifteen years.
• 1983: The bestseller The Fifth Generation predicts Japan will create intelligent machines within ten years.
• 2002: MIT scientist Rodney Brooks predicts machines will have "emotions, desires, fears, loves, and pride" in 20 years.
There’s an argument to be made for reaching for the stars. Fiction is often said to mimic reality… we would suggest the opposite is also true. Think of the flatscreen TV, mobile phones and wearable technology. When Star Trek first aired, these were firmly set in the realm of future tech… but today, they’re a normal part of everyday life (and some would say the iPhone mimics the Star Trek communicator itself).
In business, however, the reality of AI is some way from that seen in science fiction. Machine learning can’t be classified as a form of intelligence – it’s closer to clever coding: a means of letting a machine do more for us, automate actions, and do what a computer does best – compute.
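Strip away the mystique and “machine learning” really is that kind of clever coding. As a minimal, illustrative sketch (plain Python, no real ML library – the function names are our own invention), here is a machine “learning” a straight line from past data by least squares, then “predicting” a new value. At no point does it understand anything; it simply computes:

```python
def fit_line(points):
    """'Learn' the slope and intercept of the least-squares line through points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(x, slope, intercept):
    """Apply what was 'learned' -- plain arithmetic, not understanding."""
    return slope * x + intercept

# "Train" on past observations, then "predict" an unseen one.
slope, intercept = fit_line([(1, 2), (2, 4), (3, 6)])
print(predict(4, slope, intercept))  # 8.0
```

Real machine-learning systems fit far more elaborate functions to far more data, but the principle is the same: computation, not cognition.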
Yes, a large number of AI projects have failed… but each failure has acted as a building block for bigger and better things – a remarkably human trait.
One AI system, built by Northpointe, was designed to predict the likelihood that a previous offender would commit another crime… but the software was accused of racial bias after non-white offenders were disproportionately marked as higher risk.
In healthcare, however, we have a greater amount of data – cold, hard facts on which AI can be fed. Indeed, AI is now available to turn standard fitness trackers into “life-saving medical devices”. A wearable EKG monitor, designed to assess heart health, was recently noted as the first (official) medical use of wearables and the AI algorithms that power them.
The opportunity for AI to spot trends, crunch big data and replace our traditional input processes is very much here. The ability of AI to do what once took hundreds of humans is at the forefront of the field today… Just ask Alexa, Siri, Google Home or Cortana to translate something.
AI isn’t the I, Robot pinnacle of computing… but it is a remarkably effective tool for specific, well-defined challenges. If one thing is certain, it’s that if people can think of something, then with enough time they can make it happen.
What’s your experience been of AI so far? Does it live up to the hype? Get in touch and let us know.