Some very basic thoughts on Artificial General Intelligence
AI is everywhere. I've even started asking ChatGPT for solutions rather than visiting Google. GPT may not always be bang on, but it's (for me, at least) usually more bang on than an aggregation of results based on keywords and overall page rank.
The science behind Artificial General Intelligence (AGI) fascinates me. On one hand, I have no idea where to begin my thought process. On the other, it makes sense that the human brain is a pretty reasonable starting point. But, and it's a big but, is that over-complicating the problem?
AGI, put simply, is the ability of a computerised system to process unfamiliar knowledge based upon learned knowledge. Crucially, knowledge that the system has acquired itself. It stands to reason that any system capable of AGI would soon surpass human intelligence, since that system wouldn't necessarily have the capacity for conflicting emotions to disturb the process. As brilliant as people can be, there are always outside factors that can impede behaviour: illness, for example.
This is nothing new. Alan Turing, the grandmaster codebreaker who contributed to the early resolution of WWII, asked the question "can machines think?" With his 1950 paper "Computing Machinery and Intelligence", he paved the way for some brilliant thought and surely opened the door for many of the brilliant minds operating in the sphere of AI today.
So how do you create a system capable of thinking for itself such that it becomes powerful enough to take on problems for which it has no training?
My first thought is, "what if every thought, every action, were built upon an inherent set of core emotional skills?"
For example, every human has the capacity for empathy, sympathy, like, dislike, fear, confidence, confrontation and reasoning. All of these go a long way towards defining somebody's persona. There may be more core skills that I haven't thought of, but the point is, each of those is quantifiable and built into everybody.
From these core emotional skills, we can extrapolate and find hundreds (thousands) of branching skills, each of which can be traced back to a core emotional skill.
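To make that concrete, here's a rough sketch in Python of how I imagine the structure. Everything in it (the names, the weights, the 0-to-1 scale) is an assumption I've invented for illustration; the only point is that each core emotion is quantifiable and each branching skill traces back to one.

```python
# A minimal sketch, assuming invented names and a 0.0-1.0 weight scale:
# each core emotion carries a quantified weight, and every branching
# skill keeps a reference back to the core emotion it derives from.
from dataclasses import dataclass

@dataclass
class CoreEmotion:
    name: str      # e.g. "empathy", "fear", "like"
    weight: float  # quantified strength, 0.0 to 1.0

@dataclass
class BranchingSkill:
    name: str            # e.g. "caution"
    parent: CoreEmotion  # traceable back to a core emotion
    weight: float

# The in-built, quantifiable core set every instance starts with.
core = {name: CoreEmotion(name, 0.5)
        for name in ("empathy", "sympathy", "like", "dislike",
                     "fear", "confidence", "confrontation", "reasoning")}

# A branching skill extrapolated from the core emotion of fear.
caution = BranchingSkill("caution", parent=core["fear"], weight=0.3)
```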
From there, is it 'simply' a case of quantifying inputs to determine outputs?
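Taken at face value, that could be something as naive as a weighted sum. The sketch below is only meant to illustrate the shape of the idea; the scoring rule and all the names are my own assumptions, not a serious proposal.

```python
# A naive reading of "quantify inputs to determine outputs": a stimulus
# activates each core emotion by some amount, and the output is the
# emotion-weighted sum of those activations. Purely illustrative.
def respond(stimulus: dict[str, float], emotions: dict[str, float]) -> float:
    return sum(emotions[name] * activation
               for name, activation in stimulus.items()
               if name in emotions)

core_weights = {"empathy": 0.5, "fear": 0.5, "dislike": 0.5, "like": 0.5}

# e.g. new material activates 'fear' strongly and 'dislike' very strongly
score = respond({"fear": 0.8, "dislike": 0.9}, core_weights)
print(score)  # 0.85
```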
This may be way off, but it stands to reason that as we learn, we tap into a core emotion such as 'like'. We may even be driven by one or more of the other core emotions, and what we learn would be stored in memory relative to those emotions.
Here's a clumsy example.
I have a fascination with the rise of Hitler's Germany and the outbreak of WWII. If I were to pick out some key core emotions here, I may well select 'dislike' and 'fear'. The more I learn of the Third Reich, the SS and the atrocities of the Holocaust, the more I develop an understanding of a dark period in history that I label 'dislike'. But beyond the word 'dislike' I can branch out and start to quantify peripheral emotions such as 'caution'.
Such emotions are present in everybody in varying amounts. They are, therefore, known and quantifiable. As learning accumulates, you could soon begin to imagine a data map of intelligence, which in itself would present a vast repository for recollection and problem solving.
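Here's one toy way such a data map might support recollection: store each learned fact alongside the emotion weights it was acquired under, then recall by emotional similarity. The similarity measure and every name below are assumptions made purely for illustration.

```python
# A toy "data map": facts stored with the emotion weights they were
# learned under, recalled by nearest emotional distance. Illustrative only.
from math import sqrt

memories: list[tuple[str, dict[str, float]]] = []

def learn(fact: str, emotion_weights: dict[str, float]) -> None:
    memories.append((fact, emotion_weights))

def recall(query: dict[str, float]) -> str:
    # Nearest memory by Euclidean distance over the shared emotion axes.
    def distance(weights: dict[str, float]) -> float:
        axes = set(weights) | set(query)
        return sqrt(sum((weights.get(a, 0.0) - query.get(a, 0.0)) ** 2
                        for a in axes))
    return min(memories, key=lambda m: distance(m[1]))[0]

learn("rise of the Third Reich", {"dislike": 0.9, "fear": 0.7, "caution": 0.6})
learn("Turing's 1950 paper", {"like": 0.8, "confidence": 0.5})

print(recall({"fear": 0.8, "dislike": 0.8}))  # -> rise of the Third Reich
```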
But the great question is: how do you instruct a machine to act upon such knowledge to solve a problem? A solution that, crucially, must be free from any error in judgement.
These are just some initial thoughts and could be a million miles away from being useful. I'm a software developer but not a scientist. Email me with your own thoughts. I'll post any comments that promote discussion.