

Not All Algorithms Are AI (Part 3): General Intelligence

Forbes Technology Council

Founder & CEO of Vital, AI-powered patient experience. Formerly Founder & CEO of Mint.com. 10 patents in algorithms.

“Not All Algorithms are AI” is a three-part deep dive into the evolution of algorithms, what brought us to generative AI, and how to understand what this technology will do for your business.

In Part 2, we learned how generative AI uses context to produce human-like results. A prompt such as “write like Shakespeare,” or your conversation history with ChatGPT, is “context.” While today’s AI understands context, it has no concept of time, nor does it face a penalty for incorrect decisions.

Artificial General Intelligence And The Rise Of Conscious Machines

Contextual decision-making is one key to unlocking the final phase of algorithms: artificial general intelligence (AGI), sometimes known as “strong AI.” This is human-level intelligence: self-supervised learning, creativity and building mental models. It is machine consciousness.

Today’s generative AI architectures are a dead end on the path to AGI. While impressive to the public, to those who understand what’s beneath the covers, they are still statistical. At best, they are like our own subconscious: vast stores of data whose “dreams” make sense only in short segments.

While AGI doesn’t exist, here are six clues to how and when it will be built.

First, it will be hierarchical: a layer of sensory perceptions (like pixels) building to ostensive concepts like a chair (things you can point at), building to abstract concepts (fairness). The concept of a “chair” is unitless; you can recognize a tiny dollhouse chair or a chair the size of a mountain. It is the relationship between the parts that is abstracted.
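To make the “unitless” point concrete, here is a minimal sketch, where a “chair” is defined purely by the relationship between its parts rather than their absolute size. The part layout and the rule itself are invented for illustration, not drawn from any real perception system:

```python
# A toy scale-invariant concept: "chair" is the relationship between
# parts (legs below seat, seat below back), independent of units or size.
# The part names and the rule are hypothetical, purely for illustration.

def is_chair(parts):
    """parts: dict mapping part name -> height of that part (any units)."""
    return parts["legs"] < parts["seat"] < parts["back"]

dollhouse = {"legs": 0.01, "seat": 0.02, "back": 0.04}  # centimeter scale
mountain = {"legs": 100, "seat": 300, "back": 700}      # hundreds of meters

print(is_chair(dollhouse), is_chair(mountain))  # True True
```

The same relational test accepts both the dollhouse chair and the mountain-sized one, because only the arrangement of parts is abstracted, never the scale.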

Second, consciousness infers causality natively, through its very structure. A ball is released from your hand; the eyes perceive the movement, and that movement is always down. It is a sequence of events. Sequence gives both a sense of time and causality, neither of which today’s AI architectures possess.

A causal graph also allows “visualization” and goal planning. As humans, we can traverse our mental graph: if this happens, then three things might happen next. This allows us, in a very real sense, to predict the future.
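Traversing a causal graph to enumerate possible futures can be sketched in a few lines. The graph below is a made-up toy, and the function names are hypothetical; it only illustrates the idea of walking cause-and-effect chains forward:

```python
# A toy causal graph: each event maps to the events it can cause.
CAUSES = {
    "release ball": ["ball falls"],
    "ball falls": ["ball bounces", "ball rolls away"],
    "ball bounces": ["ball comes to rest"],
    "ball rolls away": ["ball comes to rest"],
}

def possible_futures(event, depth):
    """Enumerate chains of events reachable from `event` within `depth` steps."""
    if depth == 0 or event not in CAUSES:
        return [[event]]
    futures = []
    for nxt in CAUSES[event]:
        for chain in possible_futures(nxt, depth - 1):
            futures.append([event] + chain)
    return futures

for chain in possible_futures("release ball", 3):
    print(" -> ".join(chain))
```

Starting from “release ball,” the traversal predicts two distinct futures, both ending with the ball at rest; that branching enumeration is the “if this happens, then…” planning described above.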

Third, consequences matter. As a child, pronouncing a word correctly elicits joy from your parents; doing something wrong, a frown. Reinforcement learning requires an evaluation of right and wrong. Today, the dirty secret of generative AI is that it’s actually two AI models: the first is the raw subconscious “memory” (text and images from the internet), and the second is a model of “human preference.” This second AI has been trained by thousands of (typically low-paid) humans and serves as a “filter” to ensure the outputs “feel” intelligent. But it’s just a mask.

General AI must do this on its own. For that, we need the machine equivalent of pain and pleasure—perhaps a bonus or penalization in energy or computational capacity—to give consequence to action and refinement to thought.
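The reward-and-penalty idea can be sketched as a bare-bones reinforcement loop. Everything here, including the actions, the reward values and the learning rate, is invented for illustration; it is not how any production system is trained:

```python
import random

# Toy reinforcement: the agent keeps a value estimate per action and
# learns from "pleasure" (+1) or "pain" (-1). All numbers are illustrative.
random.seed(0)
values = {"say word correctly": 0.0, "say word wrongly": 0.0}
LEARNING_RATE = 0.1

def reward(action):
    # The environment's consequence: joy for correct, a frown for wrong.
    return 1.0 if action == "say word correctly" else -1.0

for _ in range(200):
    # Explore 10% of the time; otherwise pick the best-known action.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # Nudge the estimate toward the consequence just experienced.
    values[action] += LEARNING_RATE * (reward(action) - values[action])

print(values)
```

After a few hundred trials, the “pleasurable” action is valued far above the “painful” one, which is the machine analogue of consequence refining behavior.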

Fourth, nothing exists in a vacuum. An AI must be given the opportunity to interact with its environment (or an equivalent simulation). A child infers gravity by knocking a stuffed animal off the couch repeatedly. AI must iteratively build a model of how it can interact with the world.
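Learning by interaction can be sketched as an agent probing a simulated world and recording outcomes. The environment’s hidden “physics” and the inferred rule are made up purely to illustrate the idea:

```python
# Toy world-model learning: an agent repeatedly acts in a simulated
# environment and records what happens. Environment and rule are invented.

def environment(action):
    # The simulation's hidden physics: released objects always fall.
    return "falls" if action == "release object" else "stays put"

observations = {}
for _ in range(20):
    outcome = environment("release object")
    observations[outcome] = observations.get(outcome, 0) + 1

# The agent's model: the outcome it has seen most often after this action.
learned_rule = max(observations, key=observations.get)
print(f"release object -> {learned_rule}")  # prints "release object -> falls"
```

Like the child knocking the stuffed animal off the couch, the agent never sees the rule written down; it infers “released things fall” from repeated interaction.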

Fifth, you need a model of the consciousness of others (whether AI or human). Certain birds, for example, know that if another bird sees them hide a nut, that resource may be stolen. Putting yourself into someone else’s shoes is necessary for adversarial thinking and empathy alike.

Lastly, AGI requires self-directed attention. At any given moment, a human has tens of millions of pixels, sensory inputs from skin and organs, continuous sound, taste and more. And we discard 99% of it. Our focus is selective. It’s based on the goal at hand. In some ways, this is true consciousness, a being aware of itself and able to direct itself. Practically speaking, this requires inferring a model of your own consciousness. Psychologically, one might call that model an “ego” or self-awareness.
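Goal-driven selective attention can be sketched as scoring every sensory input against the current goal and discarding all but the most relevant. The inputs, goals and relevance scores below are all made up for illustration:

```python
# Toy selective attention: keep only the inputs most relevant to the goal.
# Input names, goals and scores are hypothetical, purely illustrative.
inputs = {
    "sound of own name": {"find friend": 0.9, "cross street": 0.2},
    "oncoming car":      {"find friend": 0.3, "cross street": 0.95},
    "itchy sweater":     {"find friend": 0.05, "cross street": 0.05},
    "red traffic light": {"find friend": 0.1, "cross street": 0.9},
}

def attend(goal, k=2):
    """Keep the k inputs most relevant to the goal; discard the rest."""
    ranked = sorted(inputs, key=lambda i: inputs[i][goal], reverse=True)
    return ranked[:k]

print(attend("cross street"))  # ['oncoming car', 'red traffic light']
print(attend("find friend"))   # ['sound of own name', 'oncoming car']
```

The same sensory stream yields different survivors depending on the goal at hand, which is the “discard 99%” selectivity described above.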

In the book “Thinking, Fast and Slow,” Nobel Prize winner Daniel Kahneman differentiated System 1 from System 2 thinking. Today’s generative AI is, at best, “System 1”: quick, instinctive and automatic. It is closer in capacity to a fish or reptile. “System 2” thinking is slow, conscious, logical and creative. Creativity, I believe, is your self-directed focus moving forward through your mind’s causal graph. This allows you to predict multiple possible futures—even those you have never seen before.

In the words of Meta’s Chief AI Scientist, Yann LeCun:

“LLMs have a lot of accumulated knowledge but very little intelligence.

An elephant or a 4-year-old is way smarter than any LLM.”

It seems logical, then, that artificial general intelligence may be trained like a child: with vision, hearing and touch, and with “values” of what is or isn’t acceptable in a particular job. Training “AI children,” whether in an artificial world or a constrained environment, will be big business. The big advantage over human children will be that of software: infinitely transferable and replicable. Only one machine, in one place, once across all time, needs to learn something for other machines to take advantage of it. By contrast, each human begins as a blank slate, a lifetime’s wisdom erased at death.

The story of my professional life has followed the evolution of algorithms. And it may drive more of your life than you are aware of. The two sustainable moats of the tech industry are algorithms (Google, OpenAI) and platform/network effects (Meta, Airbnb). Without mastering algorithms, your fate may be that of DocuSign: great people, great execution and nothing preventing HelloSign, EchoSign or other copycats.

For me, I’m all in on AI. At Vital, we are using a dozen algorithms to improve patient health and hospital experience, to detect severe infections like sepsis hours before they become dangerous. Soon, we will be using AI to predict your personal “health future”: an AI to see how surgeries, medications and dietary changes will alter your illness and life expectancy. In a literal sense, we must all embrace AI or see our own wisdom perish.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
