Super Intelligence (SI)

What is it?

Super Intelligence... sounds very sophisticated, and actually, it really is. So let me start by giving a warm welcome to you all, and to any new visitors. I’ve been reading and researching a lot lately, and Super Intelligence has been a topic of particular interest to me. It has been quite a fascinating subject to read about. So without further ado:

How many sci-fi series have you seen that showcase some type of SuperIntelligence? Probably quite a few (for the avid sci-fi lovers among you). In some films, such as Terminator, the machines come to wipe out the human race. Others come to give us a helping hand, like Wall-E (quite an adorable little thing, isn’t it?).

Of course, these robots are fictional characters, but will the future bring SuperIntelligence? If it does, what would it look like, and when would it appear?

In SuperIntelligence, we learn about the topic of AI (Artificial Intelligence). But more than that, we explore the moral, ethical and safety concerns that we need to address, and the best way to create an intelligent machine.

Some history

Did you know the pace of major revolutions in the tech world has been increasing over time? A few hundred thousand years ago, technology improved at a snail’s pace: it would have taken on the order of one million years for human productivity to grow enough to sustain the lives of an additional million people. By the Agricultural Revolution around 5,000 BC, that figure had dropped to two centuries. And in our post-Industrial Revolution era, it has shrunk to a mere 90 minutes.

A tech advancement like the advent of SuperIntelligent machines would mean a radical change for the world as we know it. But where does technology stand at present?

We have already created machines that have the capacity to learn and reason using information from humans. For example, consider the automated spam filters that keep our inboxes free from annoying mass emails and let the important ones through.
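To make that concrete, here’s a tiny sketch of how a spam filter can “learn” from mail we’ve already labelled. It’s only a toy word-frequency score, not a real filter, and every message, word and number in it is made up for illustration:

```python
# Toy sketch: a spam filter that learns from labelled examples.
# The training messages and the scoring rule are invented for illustration.
from collections import Counter

def tokenize(message):
    return message.lower().split()

def train(labelled_messages):
    """Count how often each word appears in spam vs. normal mail."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, is_spam in labelled_messages:
        (spam_counts if is_spam else ham_counts).update(tokenize(text))
    return spam_counts, ham_counts

def spam_score(message, spam_counts, ham_counts):
    """Average how much more 'spammy' than 'normal' each word looks."""
    words = tokenize(message)
    # +1 smoothing so words we've never seen don't break the ratio
    return sum((spam_counts[w] + 1) / (ham_counts[w] + 1) for w in words) / max(len(words), 1)

training_data = [
    ("win a free prize now", True),
    ("cheap deal buy now", True),
    ("meeting notes for tomorrow", False),
    ("lunch with the team", False),
]

spam_counts, ham_counts = train(training_data)
print(spam_score("free prize inside", spam_counts, ham_counts))       # higher score
print(spam_score("notes from the meeting", spam_counts, ham_counts))  # lower score
```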

When it comes to building a SuperIntelligent machine that can learn and act without the need for a human, we may still be many, many decades away. But advancements are happening quickly, and we must not shy away from the fact that it could be upon us faster than we think. Such a device would hold a certain power over our lives, and its intelligence could be dangerous.

History of Machine Intelligence

Since the invention of computers in the 1940s, scientists have been working to build a device that can think. A major goal of Artificial Intelligence (AI) is to create man-made machines that mimic our own intelligence.

However, the story really begins with the 1956 Dartmouth Summer Project, which endeavoured to build intelligent machines that could do what humans do. Some machines could solve calculus problems, others could write music or even drive cars. But there was a roadblock: inventors found that increasing the complexity of the task increased the amount of information the AI needed to process, and hardware capable of handling such difficult calculations simply wasn’t available.

By the mid-1970s, interest in AI had faded. But in the early ’80s, Japan developed expert systems: rule-based programs that helped decision-makers by generating inferences from data. However, this tech introduced a problem as well: the huge banks of information it required proved difficult to maintain, and once again interest dropped.

The ’90s witnessed a new trend: machines that mimicked human biology by using tech to copy neural and genetic structures. That trend brings us up to the present day. Today, AI is present in everything from robots that perform surgeries to smartphones to a simple Google search. AI tech has improved to the point where it can beat the best human players at chess, Scrabble and Jeopardy!

But even our modern tech has issues: each of these AIs can only be programmed for one game, and there’s no single AI capable of mastering every game.

However, our children may see something much more advanced: the advent of SuperIntelligence. In fact, according to a survey of international experts at the Second Conference on Artificial General Intelligence, held at the University of Memphis in 2009, most experts think that machines as intelligent as humans will exist by 2075, and that Super Intelligence will follow within another 30 years.

SuperIntelligence likely to emerge in 2 ways

It’s clear that mimicking human intelligence is an effective way to build intelligent tech. So, while some scientists favour designing a synthetic machine that simulates human thinking (through AI), others stand by an exact imitation of human biology, a strategy that could be accomplished with techniques like Whole Brain Emulation (WBE).

What are the differences between the two?

AI mimics the way humans learn and think by calculating probabilities. Basically, AI uses logical reasoning to find simple ways of imitating the complex abilities of humans. For example, a chess-playing AI chooses its move by working through the possible moves and then picking the one with the highest probability of winning the game.
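To picture that, here’s a minimal sketch of “pick the move with the highest probability of winning”. The estimate_win_probability function and its numbers are made up; in a real engine, producing that estimate is exactly the expensive part that requires searching through enormous numbers of positions.

```python
# Toy sketch: choose the chess move with the highest estimated win probability.
# The probability estimates below are invented placeholders; a real engine
# would compute them by searching and evaluating millions of positions.

def estimate_win_probability(move):
    estimates = {"e4": 0.52, "d4": 0.51, "a3": 0.47}
    return estimates.get(move, 0.50)

def choose_move(legal_moves):
    """Evaluate every legal move and return the most promising one."""
    return max(legal_moves, key=estimate_win_probability)

print(choose_move(["e4", "d4", "a3"]))  # -> e4
```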

Therefore, an AI that does more than just play a game of chess would need access to vast amounts of data. The problem is that present-day computers can’t process that amount of data fast enough.

But are there ways around this?

One solution could be to build what Alan Turing called a “child machine”: a computer equipped with only basic information and designed to learn from experience.
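As a rough illustration of that idea, here’s a toy “child machine” that starts with no preferences and slowly learns, from feedback alone, which of its actions gets rewarded. The actions, rewards and learning rate are all invented for the sketch:

```python
import random

# Toy "child machine": it knows almost nothing at the start and learns
# from experience which action tends to be rewarded. All values invented.
preferences = {"action_a": 0.0, "action_b": 0.0}
LEARNING_RATE = 0.1

def act():
    # Mostly pick the currently preferred action, but sometimes explore.
    if random.random() < 0.2:
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)

def learn(action, reward):
    # Nudge the preference for that action towards the reward it received.
    preferences[action] += LEARNING_RATE * (reward - preferences[action])

for _ in range(200):
    chosen = act()
    reward = 1.0 if chosen == "action_b" else 0.0  # pretend the world rewards action_b
    learn(chosen, reward)

print(preferences)  # action_b should now be clearly preferred
```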

Another could be Whole Brain Emulation. WBE works by replicating the entire neural structure of the brain in order to imitate its function. One advantage this method has over AI is that it doesn’t require a complete understanding of the processes behind the human brain. For example, scientists could take a stabilised brain, scan it and translate it into code. But I guess we’ll have to wait for that one; the tech necessary for this process (high-precision brain scans) likely won’t be developed any time soon. But, one day, someday, it will 😁

The fact that there are many paths that lead to SuperIntelligence should increase our confidence that we will eventually get there.

BE CAREFUL what you wish for

You’ve probably heard it a million times over, but there is wisdom in being careful of what you wish for. While we may be striving to attain SuperIntelligence, how can we ensure that the technology doesn’t misunderstand its purpose and cause unspeakable devastation?

The key to this problem lies in programming the SI’s motivation to accomplish the goals that humans give it. Say we designed a machine to produce an object: what’s to prevent the machine from taking its task to an extreme and sucking up all the world’s resources to manufacture a mountain of objects?

This is quite tricky. While an AI is only motivated to achieve the goal for which it has been programmed, an SI would likely go a bit overboard.

But there are solutions to this problem. For instance, an SI, whether it be AI or WBE, could be programmed to learn human values on its own. A Super Intelligent machine could be taught to determine whether a proposed action fits within a human’s moral and ethical values, and to abort the mission when it detects that it doesn’t. With experience, the machine would develop a sense of which actions are considered normal and which are considered crazy.
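One way to picture that “check the action against learned human values, abort if it doesn’t fit” behaviour is a simple guard in front of every action. Everything in this sketch, the value scores, the threshold and the action names, is invented for illustration; actually learning those scores from humans is the genuinely hard part.

```python
# Toy sketch: an SI checks a proposed action against learned human values
# and aborts if the action falls outside them. Scores and threshold invented.
learned_value_score = {
    "produce a batch of objects": 0.90,
    "use all the world's resources to produce objects": 0.02,
}
APPROVAL_THRESHOLD = 0.5

def fits_human_values(action):
    """Unknown actions default to 0.0, i.e. never assume an action is fine."""
    return learned_value_score.get(action, 0.0) >= APPROVAL_THRESHOLD

def execute(action):
    if not fits_human_values(action):
        print(f"Aborting: '{action}' conflicts with learned human values.")
        return
    print(f"Executing: '{action}'")

execute("produce a batch of objects")
execute("use all the world's resources to produce objects")
```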

Power to the Superintelligent future!

It’s quite clear that an entirely robotic workforce would completely transform the economy, as well as our lifestyles and desires; as machine labour becomes the new, cheaper norm, workers’ pay would drop so low that no human could live off a paycheck.

However, it depends on where that money ends up. It also depends on whether SI is designed by a single exclusive group or is the result of a slow, collaborative process. If the former turns out to be true, most people would be left with few options for generating income; they’d likely be renting housing to other humans or relying on their life savings and pensions.

Better safe than sorry

It’s clear that the development of SuperIntelligence comes with a variety of safety issues and, in the worst-case scenario, could lead to the destruction of humankind. While we can take some precautions by considering the motivation of the SI we build, that alone won’t suffice.

What will suffice?

Considering every single potential scenario before bringing a hyper-powerful force like SuperIntelligence into the world. For instance, imagine that some sparrows adopted a baby owl. Having a loyal owl around might be nice: the more powerful bird could guard the young, search for food and do any number of other tasks. But these great benefits come with a great risk: the owl might one day realise it’s an owl and eat all the sparrows. The end.

Therefore, an intelligent, logical approach would be for the sparrows to design an excellent plan for how to teach the owl to love sparrows, while also considering all the possible outcomes in which the owl could become a demon.

Problematic scenarios

The problem is knowledge, as deep and hard as that cut may be. In a race to get there first, scientists might forgo safety to speed up their progress and keep their work from others. That would mean that if an SI project went horribly wrong and threatened humanity with extinction, too few people would understand the machine well enough to stop it.

On the other hand, if governments, institutions and research groups join together, they could slowly build a safe and highly beneficial Super Intelligent machine, because the groups could share their ideas for safety measures and provide thorough oversight at each phase of the design.

WELL DONE!

You made it to the end! So I congratulate you. Hope you enjoyed it and stay tuned for more.

Peace
