Artificial Intelligence is evolving right now – here’s how

Aurther Shoko


This is part of a series on Artificial Intelligence. If you are catching it for the first time, I’d recommend that you start here, where I introduce the idea and provide some essential background.

In the last article, I talked about the usefulness of thinking about artificial intelligence in phases.

True, you could start biting into this elephant anywhere and anyhow. The phases approach is simply my recommended way of understanding, with greater clarity, the goals and ultimate intentions of AI.

So let’s pick it up.

Ever heard of intelligence calibers?

A combination of things has driven the renewed interest in artificial intelligence that can be seen all around. Among them are common human problems like disease.

The internet combined with superior computing power is also key. Huge amounts of data, known as big data, are also behind the current wave of AI efforts.

Disentangling this data is worth billions of dollars to all kinds of industries including cities and governments.

Considering that AI has been around since the 1950s, I would say the most important driver right now is big spending by wealthy tech companies.

Since the 1950s, AI research has had its ups and downs, the downturns commonly known as AI winters, caused by a combination of disinterest and funding problems.

Now that funding is no longer generally government-sourced and interest is clear, virtually all the stops have been pulled out.

Artificial intelligence has entered mainstream media reporting thanks to tech companies promoting flagship AI releases such as Amazon Echo or Google Assistant.

Though exciting to many of us, this type of AI is but a fraction of the whole AI idea. It represents the early days, which also means it is the shallow end.

Futurists have identified three phases, or stages, in AI evolution. The first intelligence caliber is known in leading AI research circles as Artificial Narrow Intelligence, or simply ANI.

Virtually all AI built in the past and on the market today falls under ANI. It is called narrow artificial intelligence because it is limited to specific industries, products and services.

This makes ANI ubiquitous; it is in just about every modern gadget and electronic device found all around us.

Google’s autonomous car is an ANI system; so are aircraft autopilot systems, search engine technologies, stock market trading systems, Japan’s industrial and home robots, and Google’s AlphaGo, which recently beat Grandmaster Lee Sedol at the game of Go.

The list is endless. These systems are all good at what they do.

First intelligence caliber systems, though becoming increasingly interconnected via the internet, still remain largely independent.

For sci-fi types who fear robots taking over the world, the good news is that at this point ANI systems are incapable of doing anything of this sort.

Autonomous cars still don’t communicate with aircraft flying systems, nor do they talk (for now) to your fridge at home or your swimming pool.

Still, there’s a lot that these systems can do. Governments and municipalities like the City of Harare, which has a vision of attaining world-class city status, should apply these systems.

Artificial General Intelligence (AGI)

Big tech companies like Google or Tesla are not satisfied with just giving you better search results or a self-driving car. They are refining their AI efforts to improve the value proposition of this technology.

For example, Google is more useful when it can read web pages before ranking them, understanding the content written on those pages just like a human being, and being intelligent enough, like a doctor, to say whether a page offers good or bad medical advice.

Machines will gain this value by acquiring the ability to read, comprehend and derive meaning intelligently from existing big data.

We humans take for granted that we can suddenly decide to run, take a sharp turn, and at the same time know how high and how far to jump to avoid a gully, all while turning the head and rolling the eyes.

We are able to do this thanks to general intelligence that comes with being human.

This is what inspires the next phase in AI, also known as the 2nd intelligence caliber: Artificial General Intelligence, or simply AGI.

AGI represents the arrival of machines which match the intelligence of humans. The goal, therefore, is for machines to attain general intelligence.

This entails machine intelligence with inherent cognitive functions just like those of humans. Cognitive functions are mental functions such as thinking, emotions, feelings, vision and sensation.

AGI will seek to make human-like machine intelligence commonplace.

Cognitive functions will make a better Siri or Google Assistant. Japan, whose workforce is dwindling as its population ages, sees machine intelligence one day compensating for its growing labour shortage.

The thousands of startups that have emerged in the last decade or so are each working in one of these areas. One of the most valuable pursuits right now is image recognition.

Image recognition is essentially the sense of sight. For self-driving cars to work and win regulatory approval, they simply must be able to see near-perfectly, and companies like Tesla are aware of this.

The ability for machines to see will eventually become an integral part of the future AGI human-like machine. This, of course, sounds both spooky and exciting depending on your philosophy and general worldview.

Even though tech evolution is galloping ahead, getting to AGI is actually not that simple.

Barring AI winters that may pop up along the way, if you love predictions, that date is currently put at about the year 2030. My four-year-old daughter will just have turned 18.

Teaching computers to learn

What makes AGI less simple is the challenge of how to give a computer mental functions. There are all kinds of concepts on the table for just how to achieve this feat.

These range from brain simulation and introducing evolution to computers, to building computers that do their own AI research, coding themselves in real time and changing their architecture on the go as their circumstances change.

This is also known as machine learning, and it is already developing rapidly.
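To make the idea of machine learning concrete, here is a minimal, illustrative sketch: a toy Python program that is never told the rule behind its data, yet recovers it by repeatedly nudging its own parameters to reduce its error. The data, the learning rate and the hand-rolled gradient-descent loop are all invented for this example; they do not describe any particular company’s system.

```python
# A minimal sketch of machine learning: a program that improves its own
# parameters from examples rather than being explicitly programmed.

def train(points, lr=0.01, epochs=2000):
    """Fit y = w*x + b to (x, y) pairs by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in points:
            error = (w * x + b) - y   # how wrong the current guess is
            w -= lr * error * x       # nudge parameters to shrink the error
            b -= lr * error
    return w, b

# The "data" here is just five samples of the hidden rule y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # learned slope and intercept, close to 2 and 1
```

The point is that the rule y = 2x + 1 appears nowhere in the program; it emerges from the examples, which is the essence of learning from big data.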

Some futurists believe the most viable way to achieve machine intelligence is through pirating or plagiarizing the human brain.

This could be achieved through social constructivism, the idea that human knowledge is constructed through interaction with the environment.

You play soccer or cook Sadza not because you were born doing these things but because your societal circumstances constructed you into a soccer playing or Sadza cooking individual.

AI scientists believe that computers, through complex software and hardware interaction, can be made smart just like human babies, growing up to play soccer and cook Sadza.

Social constructivism would let computers learn new things and skills from scratch by strengthening transistor pathways when they get things right and weakening them when they get things wrong.

Over time, this builds a repository of inherent intelligence, just as humans grow to become intelligent.
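The strengthen-or-weaken idea above can be sketched with a perceptron, one of the oldest learning algorithms. In this toy, the numeric weights stand in for the “transistor pathways” described above, and the task (learning the logical OR of two inputs) is a made-up example, not anyone’s actual research system.

```python
# Sketch of "strengthen when right, weaken when wrong" learning,
# in the spirit of a perceptron. The weights play the role of pathways.

def predict(weights, inputs):
    # Fire (1) if the weighted sum of inputs crosses zero, else stay off (0).
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def learn(samples, lr=0.1, epochs=20):
    weights = [0.0, 0.0, 0.0]  # start knowing nothing, like a newborn
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, inputs)
            # error == 0 means the pathway was right and is left alone;
            # otherwise each weight is nudged toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights

# Toy task: logical OR of two inputs (the constant third input is a bias).
samples = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
weights = learn(samples)
print([predict(weights, inputs) for inputs, _ in samples])  # [0, 1, 1, 1]
```

After a handful of passes over the examples, the adjusted weights reproduce OR correctly, which is the “repository of intelligence” accumulating over time.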

In my next installment, I will talk about the hardware and software challenges that currently stand in the way of the road to AGI and how AI researchers are developing workarounds.


  1. @code_writer

    Useful article, thanks. looking forward to more articles like these

  2. Peter Raeth

    If there is no human intelligence using a tool then the tool is useless. Artificial intelligence is just software running on a box filled with on/off switches. It has no intelligence in and of itself. Rather, it is the humans who use the tool who apply whatever intelligence is employed. With the right tools, people of action can make a difference.

  3. George T. Bandure

    This makes good reading. Artificial intelligence may not be that benign especially with the possibility of programmes becoming cancerous, or more appropriately non hacker proof. Inasmuch as there are forces for good, there are also forces for evil which may take advantage of the ignorance of the generality of the populace and create mutations, variations and clones which may do exactly the opposite of the intended.