There’s an awful lot of hype these days about AI. The shrill hyperbole of the mainstream media seems united on “look what an amazing thing we’ve achieved (to sell you) today!” and “it’s only a matter of time before we’re making intelligent robots”, followed by either an optimistic “our future is a symbiosis with AI, super-cybernetic organisms” or a pessimistic “fear our robot overlords when they turn on their creators!”, depending on the publication.

Machine learning in particular gets a lot of press, presented as a kind of synthesised creative thinking: data acquisition with the benefits of organic adaptability alongside all the advantages of perfect memory and superhuman calculation speed. This is very misleading. It paints a bogus expectation of AI among the general public, one that’ll take attention (and funding) away from genuine breakthroughs.

Much has been made of the recent victory by the DeepMind computer, using machine learning, over the human Go champion, a lifelong exponent of the game. This is presented as profound – not simply the computer’s mastery of Go but its ability (as it’s reported) to teach itself the game from the ground up. The story is presented as a demonstration of early-stage synthesised intelligence, a great leap forward in AI.

AI may look impressive, and in a lay sense AI may be useful as an advanced toolset, but nothing achieved in the field can yet be called fundamental. DeepMind (and other AI projects) haven’t made a breakthrough: they haven’t created intelligence in any meaningful, human-transcending sense.

At best AI is faking it. It’s being coded to look like it’s intelligent by dint of its activity resembling some processes a human being knows would require thought to carry out.

What matters for AI and machine learning is how a particular slice of reality has been broken down: its meta accurately distilled into coherent code, i.e. variables and functions. The data is then parsed through coded procedures to drive responses, and this parsing must have sufficiently high fidelity in its YES/NO, ON/OFF ruleset model. None of this, however, describes anything new.

Go is a suitable example. The media reports “a computer learns to play Go better than the best human players”, adding that “a computer uses machine learning to self-evolve from beginner to beyond expert at the game, and not even its programmers know how it achieves that.” Is that so? These statements are deliberately mysterious, similar to the journalistic techniques often used in tabloids to present astrology as an equivalent of astronomy.

After a little research, it turns out that the parameters of the Go game, as well as the rules that govern it, can be comprehensively and systematically broken down into a complete set of variables, conditions, and functions. The meta-reality we would describe as ‘to be playing Go’ – how to win, for instance – must be coded, instance by instance, to build the environment of the Go “machine learning” program layer. These steps could be generalised across a diversity of simpatico rulesets, and thus it might be said that the artificial intelligence program is able to “learn” how to play a thousand board games. Is it learning, though? It’s still limited to learning within pre-coded parameters, for whichever games fall within the range of the existing code’s ability to resolve and parse.
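This distillation can be sketched in miniature. Tic-tac-toe stands in for Go here (Go’s ruleset is vastly larger, but the principle is the same): everything the program “knows” about the game has to be pre-coded, by a human, as variables, conditions and functions.

```python
# A game's "meta" distilled into variables, conditions and functions.
# Tic-tac-toe stands in for Go: the board is a flat list of 9 cells,
# and every rule the program "knows" is pre-coded below.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def legal_moves(board):
    """The rules of play, expressed as a function: empty cells are legal."""
    return [i for i, cell in enumerate(board) if cell == " "]

def winner(board):
    """'How to win', coded instance by instance as YES/NO conditions."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

board = list("XXX" "OO " "   ")   # X has completed the top row
print(winner(board))              # -> X
print(legal_moves(board))         # -> [5, 6, 7, 8]
```

Nothing here is learned: the win conditions, the legality of moves, the very shape of the board all arrive ready-made from the coder’s head.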

Once this fragment of the AI program is complete, things become simple as “machine learning” comes into play. AI can now play out a million games of Go or Chess or whatever, starting as a beginner and applying, with perfect memory, the improvements it “learns” from each game. It doesn’t need lateral thinking or intuition or human intelligence to beat us, because it runs at speeds far beyond ours: it gathers data from each game-experience, recalls every improvement, and sorts the outcome possibilities perfectly. This eventually builds gameplay superior to any human champion.
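The “play a million games and remember every improvement” loop is easy to sketch. The game below is 1-2-3 Nim (take one to three stones; whoever takes the last stone wins) rather than Go, and the “learning” is a plain shared win-probability table updated after every game – a toy stand-in for illustration, not DeepMind’s actual method.

```python
import random

def self_play_nim(games=20000, stones=12, eps=0.2, alpha=0.1):
    """Self-play 1-2-3 Nim: both sides share one value table, and every
    position seen in a finished game is nudged toward the actual outcome."""
    values = {}  # state -> estimated win probability for the player to move
    for _ in range(games):
        n, player, history = stones, 0, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:                        # explore
                m = random.choice(moves)
            else:                                            # exploit memory:
                m = min(moves, key=lambda m: values.get(n - m, 0.5))
            history.append((n, player))
            n -= m
            player ^= 1
        winner = player ^ 1  # whoever just took the last stone wins
        for state, p in history:  # "perfect memory": update every position seen
            target = 1.0 if p == winner else 0.0
            v = values.get(state, 0.5)
            values[state] = v + alpha * (target - v)
    return values

random.seed(0)
values = self_play_nim()
# Multiples of 4 are losing positions for the player to move; the table
# discovers this from nothing but repeated play and stored outcomes.
print(values[4], values[5])
```

The point the article is making holds in the sketch: nothing “emerges” here beyond brute repetition over pre-coded rules, recorded without forgetting.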

None of this amounts to a generalised intelligence or something that we might recognise as approaching four-dimensional human intelligence. No doubt, we’ll continue to progress this field, and more slices of reality (and its functions) will be broken down, codified and parsed by custom-built hardware operated by artificial intelligence using machine learning. It would be no surprise to see robots and AI in every aspect of everyday life.

However, this isn’t true artificial intelligence. There’s a red-herring debate about the ethics of using robots as “slaves”, which presumes that at some point in their development, consciousness may emerge once the functional intelligence becomes sufficiently complex. That would certainly be a concern if artificial intelligence and machine learning meant what most people presume: that we will start by creating intelligence as minute as a snail’s or a fish’s, then move up the ladder of intelligence until we’ve reached humans and beyond. As it is, however, what we’re creating are automatic toolsets of increasing mechanical complexity, operating across a widening range of conditions; it still boils down to representing reality through integer variable arrays, parsed by mathematical functions that generate binary conditions to drive action.
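That reduction can be taken quite literally. In the sketch below (the sensor readings and threshold are invented for illustration), an integer array stands for the measured slice of reality, one mathematical function collapses it to a binary condition, and that single bit drives the action:

```python
# Reality as an integer array, parsed by a mathematical function into a
# binary condition that drives action. Readings and threshold are invented.

readings = [18, 19, 17, 16]              # e.g. room temperatures in degrees C

def too_cold(values, threshold=18):
    """One 'mathematical function' collapsing the array to a single YES/NO bit."""
    return sum(values) / len(values) < threshold

action = "heater ON" if too_cold(readings) else "heater OFF"
print(action)   # -> heater ON   (average 17.5 is below the 18-degree threshold)
```

Scale this up a billion-fold and you have today’s “AI”: grander arrays, grander functions, but the same shape.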

If a human being can’t define reality to enough ‘decimal places’, the representation’s fidelity falls short of objective reciprocity and the AI gets constructed handicapped. Objective reciprocity can be defined as the potential to parse back and forth between “reality” and the complete coded detail of the “representation”. In the case of a game, things are easier: its rules must be broken down into variables, functions and conditions, and the complexity is finite. Without this distillation, though, there’s no starting point for “artificial intelligence”, and machine learning can’t initiate it.

At best, it is a case of the human brain distilling the meta of whatever piece of reality is being automated, in detail that amounts to complete understanding, including the parameters of AI choice. Then, and only then, can machine learning become meaningful.

The human brain is slower at calculation than a computer, so the design, codification and implementation of the artificial intelligence are slow: it takes the human brain a long time to work through a myriad of possibilities, analyse them, and represent them as precisely distilled variables and mathematical functions. The ‘learning curve’ a human coder must follow is a creative challenge from the outset.

Once the code is running and the bugs are fixed, the computer can take up the task, and AI can run through it with perfect patience, performing its functions at an exponentially faster rate than any human being. Add machine learning to the mix and you add another layer of design, codification and implementation: a toolset for the AI to create, parse and resolve its own data, and to follow predefined rulesets that use this parsed data to improve subsequent iterations of its designated function.

In a way, the human Go player against the Go artificial intelligence (with machine learning as its backbone) is an unfair contest by default: it plays entirely to the AI’s strengths and the human being’s weaknesses. The only unknown factor in such an encounter is the skill and attention to detail of the AI’s coders. The surprise should be if a human player EVER beats a well-coded artificial intelligence, and even that would be a short-lived triumph, since the AI can ensure, with perfect clarity and recall, that the loss is never repeated.

Humans will have to surrender supremacy at board games, and indeed most feats of dexterity and most processes are very likely to be codified accurately. Driving vehicles, most blue-collar jobs, many white-collar professions, aspects of almost every job there is: artificial intelligence and machine learning will execute them better. At best, jobs will use AI to replace many human ‘parts’ and extend the efficiency of the rest. Society will be continuously transformed.

This does NOT mean we have created intelligence, though. Crudely, all of the above – all current artificial intelligence and machine learning – amounts to no more intelligence than a pocket calculator. However impressive the scale, however physically splendid the application, however complex the calculations: nothing in today’s world, nor anything currently in development, truly deserves to be called “artificial intelligence”. There is no example of “machine learning” that transcends human analysis, abstraction, creativity and imagination. Not yet. Don’t be dulled by the hype.

CNBC Article “Robots Have Mastered The Art Of Painting!” – this is what A.I. looks like, according to A.I.
