Jan. 14, 2021

Futuristic AI

Ben Goertzel, CEO and founder of SingularityNET and Chairman of the Artificial General Intelligence Society, interviewed by futurist Trond Arne Undheim. 

In this conversation, we talk about futuristic applications of interoperable AI, Sophia the robot, the singularity, transhumanism, how to define intelligence, decentralized, distributed, and interoperable AI, the importance of trust to progress with technology, and the future of human-computer interaction.

My takeaway is that futuristic AI will continue to fascinate, whether we ever get there or not. It is a Janus-faced future the proponents of Artificial General Intelligence are exploring. Will it solve more problems than it creates? In reality, it's not a question of when we get there, unless we suddenly find ourselves needing that level of intelligence for an existential survival issue for our race, but how we get there. At some point, we likely will. But whether it will take 50 or 150 years, I'm less sure about.

After listening to this episode, check out SingularityNET and Ben Goertzel's online profile:

The show is hosted by Podbean and can be found at Futurized.co. Additional context about the show, the topics, and our guests, including show notes and a full list of podcast players that syndicate the show, can be found at https://trondundheim.com/podcast/. Music: Electricity by Ian Post from the album Magnetism. 

For more about the host, including media coverage, books and more, see Trond Arne Undheim's personal website (https://trondundheim.com/) as well as the Yegii Insights blog (https://yegii.wpcomstaging.com/). Undheim has published two books this year, Pandemic Aftermath and Disruption Games. To advertise or become a guest on the show, contact the podcast host here.

Thanks for listening. If you liked the show, subscribe at Futurized.co or in your preferred podcast player, and rate us with five stars. If you like this topic, you may enjoy other episodes of Futurized, such as episode 30 on Artificial General Intelligence, episode 51 on AI for Learning, episode 16 on Perception AI, episode 49 on Living the Future of Work, episode 35 on How 5G+AR Might Revolutionize Communication, episode 47 on How to Invest in Sci-Fi Tech, episode 54 on the Future of AR, and episode 31 on The Future of Commoditized Robotics. Futurized: preparing YOU to deal with disruption.

 

 

Transcript


Trond Arne Undheim, Host: [00:00:00] Futurized goes beneath the trends to track the underlying forces of disruption in tech, policy, business models, social dynamics, and the environment. I'm your host, Trond Arne Undheim, futurist and author. In episode 79 of the podcast, the topic is futuristic AI. Our guest is Ben Goertzel, CEO and founder of SingularityNET and chairman of the Artificial General Intelligence Society.

[00:00:30] In this conversation, we talk about futuristic applications of interoperable AI, Sophia the robot, the singularity, transhumanism, how to define intelligence, decentralized, distributed, and interoperable AI, and the importance of trust to progress with technology. And we discuss the future of human-computer interaction.

[00:00:59] Ben, I'm so excited to have you on the show. How are you doing?

[00:01:03] Ben Goertzel, CEO, SingularityNET: [00:01:03] I'm doing great, keeping busy as usual. The path to AGI never sleeps.

[00:01:11] Trond Arne Undheim, Host: [00:01:11] Yes, you are a well-known figure, so I don't think you need a massive introduction. But since my podcast also caters to some people who don't follow artificial intelligence, or indeed all the exciting things you've done,

[00:01:22] maybe I'll just do a very brief introduction, and I'm sure I'll forget a lot of things. Essentially, I think of you as an internationalist: you have lived in a lot of places, and you have connections and collaborators on many continents.

[00:01:36] I understand you were born in Brazil, but you are American, you've lived in Asia for a while, and now you're back in the US. By background, as I've understood it, you have a PhD in mathematics from Temple University, and you've spent a lot of time straddling academia and commercial business, with lots of startups.

[00:01:58] And you are, of course, known as a very vocal advocate of the futuristic type of AI that we'll get into in a little bit. But Ben, tell me, what is it that got you on this very unique path?

[00:02:10] Ben Goertzel, CEO, SingularityNET: [00:02:10] I would say, as you said, I was born in Brazil, and then I spent my earliest childhood in Eugene, Oregon, which was an interesting place to be in the early 1970s.

[00:02:23] It was full of crazy hippies who thought the Age of Aquarius and the revolution were about to come. My parents were more on the political, revolutionary side of things, but there was just a lot of innovative, creative, optimistic thinking around. And from that whole atmosphere,

[00:02:42] I got the idea that changing the world radically for the better, creating a new and better age, was kind of the thing to do, and certainly a lot more interesting than just getting a regular job, buying a house, and earning a living. Not that there's anything wrong with that either.

[00:02:59] However, I was also very interested in technology from the beginning. My dad sat me down every week to watch the original Star Trek with Kirk and Spock and all that. I saw some of that in the first run when I was a baby, and around the same time,

[00:03:15] my first vivid memory, I think, is watching Neil Armstrong walk on the moon, when I was barely two years old or something. So on the one hand, I bought into the idea that we can change the world if we try, and that radical improvement to the human condition is possible. On the other hand, even as a kid, I was a bit of a skeptic that

[00:03:37] you can change the world just by protesting out in the street and stuff. It seemed like advanced technology had tremendous potential. And in the original Star Trek, you had the robots, and the robots were not as clever as I thought they should be. Spock and Kirk managed to back them into some elementary logical conundrums and so forth.

[00:04:02] I was like, wait a minute, we should be able to make robots way smarter than that. So even from a very early age, it seemed to me that if you could make an AI smarter than people and beneficial toward people, then you're golden, and all the other problems you want to solve are going to be solved much more easily.

[00:04:24] And my dad, in the mid-sixties, led a protest on a college campus in the US; it was SLAM, the Student League Against Mortality, so they were protesting death in the mid-1960s. But the Grim Reaper didn't care. Whereas if you use AI to design gene therapies, you may actually be able to combat death in a very concrete way.

[00:04:50] So what's been amazing to me is just how fast things have come over my lifetime. In mere decades, these ideas are no longer the fringe concerns of a bunch of crazy mavericks and science fiction aficionados; you have major corporations putting money into AGI research projects building real thinking machines, and into research projects aimed at

[00:05:17] therapies to radically extend human life and so forth. I mean, these previously science-fictional-sounding pursuits are now at least mainstream research topics. And things like face recognition or self-driving cars are major areas of industry and to a large extent operational now. So it's astounding how fast this stuff has become

[00:05:42] practical and mainstream, and I'm old enough to have seen that change happen to a remarkable degree, whereas people today who are in elementary school or high school just take it for granted that AI, robots, and self-driving cars are a thing. And if you tell them robots may be smarter than people in a few decades,

[00:06:01] they're like, yeah, of course, I saw that in an after-school special already.

[00:06:06] Trond Arne Undheim, Host: [00:06:06] Yeah, it takes a lot. I wanted to ask you, before we dive a little bit more into the world of AI, what happiness is to you.

[00:06:15] Ben Goertzel, CEO, SingularityNET: [00:06:15] What's happiness? It's a very confusing and badly defined natural language concept,

[00:06:22] I think, like consciousness or life or intelligence. It's not a very well-defined word. Of course, there's a notion of raw, sort of physiological pleasure, which we all know experientially. And it's very clear that maximizing pleasure in this sense is not what humanity is after.

[00:06:45] There's probably more basic pleasure in Stone Age life than in the current lifestyle. Then, of course, there are richer notions of satisfaction, which can even involve suffering or self-sacrifice: you can feel more satisfied if you underwent pain and suffering to achieve some greater good than if you had greater basic pleasure.

[00:07:12] So there's a richer notion of satisfaction, which is hard to quantify. It certainly has to do with having your expectations satisfied, and it also has to do with having your expectations satisfied for your extended self, beyond your own body and your own mind, with the other people and other organisms and other systems that you're entrained with.

[00:07:39] But we're complex enough that having all our expectations satisfied is not fully possible; it is not a consistent thing. Because if one of your expectations is that you want to be surprised and intrigued all the time, and another expectation is that you have some stability and safety, then there's a risk-reward balance.

[00:08:01] So we have conflicting top-level expectations, and our overall satisfaction is about how we balance them, how we approach the optimum of these conflicting sets of expectations, all of which we want fulfilled. And it's quite subtle, right?

[00:08:20] Because we're incoherent and inconsistent systems. So if you say you want to create an AI which will help people feel satisfied, or do something in accordance with human values, you're asking the AI to play along with an incoherent, inconsistent system that furthermore is revolutionizing itself continually, right?

[00:08:42] Because what makes us satisfied now is not what made people satisfied 50 years ago. Our values now are radically different.

[00:08:49] Trond Arne Undheim, Host: [00:08:49] That's very interesting. Further to that, I saw from, I think it was your LinkedIn profile, that you list a lot of passions, and I can really relate to that.

[00:08:57] You say your passions are numerous, so AGI is just one of them. We talked briefly about life extension and this idea of fighting death; there's philosophy of mind and philosophy generally, and consciousness, a concept you mentioned; complex systems, more obviously still in this vein; but then improvisational music and fiction are also among your passions.

[00:09:22] How do those fit into this picture? Or are they just, like you said, incoherent passions?

[00:09:32] Ben Goertzel, CEO, SingularityNET: [00:09:32] No, to me as a human, the things I'm involved in, I guess the ones I'm more differentially good at or successful at, are just a part of the whole picture. I love hiking and mountain climbing, and I love storytelling, and I love improvising on the piano and the flute and so forth. And actually, to me, these things are just as satisfying and just as

[00:10:01] creative. And I may think just as hard about how to get over a mountain or how to structure a song as I do about designing AI systems. So to me as a human, these are all part of the same process of experiencing life and trying to create and discover interesting new things.

[00:10:25] Of course, it happens that I don't think I'm as good a musician as I am an AI researcher. So I put more time into developing AI, both because I seem to be especially talented there, for whatever quirky genetic reason, and because, while making beautiful music may bring you into a realm of experienced immortality outside of space and time, as you can get when you're on top of a mountain peak,

[00:10:54] if you can create a superhuman AI, the payoff is significantly larger in many ways, right? You can cure mortality. You can create material abundance for everyone. You can fix mental illness. You can allow all of us to radically expand our intelligence and state of consciousness.

[00:11:12] So what's interesting about AI is the leverage, right? For the same amount of work that you put into making better chocolate or faster cars, or creating amazing new music, or mounting an expedition across Africa, for that same amount of work, you can

[00:11:32] completely transcend the order of humanity as we now know it and create new realms of experience as far beyond us as we are beyond bacteria. That's the leverage. Once you get to a mind that can create a new mind, that can create a new mind, that can create a new mind, the leverage is incomparable among things that we humans know how to do.

[00:11:55] Trond Arne Undheim, Host: [00:11:55] But a lot of people say, superficially I would argue, at least about current AI systems, that they aren't that good, and that by the way they will never, or it will at least take them forever to, reach any kind of human-level creativity. But in the way that you speak about your passions in music and improvisation more generally, you don't seem to make such a massive distinction between intelligence in a

[00:12:23] machine-like or structured sort of way and this more improvisational mindset, when it comes to AI systems.

[00:12:32] Ben Goertzel, CEO, SingularityNET: [00:12:32] Current AI systems are not creative in very interesting ways. And I think that's actually going to change in the next, say, three to ten years. And I think it's largely an artifact of

[00:12:50] the sort of business and industry structure that's driven the development of AI. Until the early aughts, most AI came out of the US military and some European militaries, and the military is not about free-flowing creativity; it's about obedience to doctrine, right? In the same way, medical applications aren't really about creativity either.

[00:13:12] You have to be safe and conservative. And now, since that era, it's driven by Wall Street and by advertising companies like Google. But in no case have I seen a huge amount of human or financial resources go into AIs that are inventing and improvising

[00:13:37] wild, creative things. That's just not what the industry has focused on. If you look at the deep learning algorithms that are dominating the modern AI sphere, these are just recognizing large amounts of simple patterns in large data sets and weaving them together.

[00:13:54] And as a hobby, I've been playing around a bunch with deep neural net models of music. And it's quite interesting: you can make a system where, if you feed it, say, a minute of a track by Steve Vai or Eddie Van Halen or Buckethead, some progressive or shred guitarist or something,

[00:14:15] it will continue that. And sometimes you'll get something that sounds really cool, better than a lot of what those artists have played. And this is a deep neural model with five billion parameters, which takes a long time to train on a bunch of GPUs.
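
As a toy illustration of the "continuation" idea Ben describes, feeding a model a snippet and having it extend the track, here is a minimal sketch using a note-level Markov chain. This is only a cartoon of continuation; the systems discussed above are large neural nets with billions of parameters, and the tokens below are made up.

```python
import random
from collections import defaultdict

# Toy continuation model: learn which symbol tends to follow which,
# then extend a seed sequence. A cartoon of "feed it a minute of a
# track and it continues it", not the neural models discussed above.

def train(sequence):
    table = defaultdict(list)
    for prev, nxt in zip(sequence, sequence[1:]):
        table[prev].append(nxt)
    return table

def continue_sequence(table, seed, length=16):
    out = list(seed)
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:  # unseen context: fall back to any known follower
            candidates = [t for followers in table.values() for t in followers]
        out.append(random.choice(candidates))
    return out

riff = ["E", "G", "A", "E", "G", "Bb", "A", "E", "G", "A", "G", "E"]
model = train(riff)
print(" ".join(continue_sequence(model, riff[:4])))
```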

[00:14:29] It's all very cool, but what you're not going to get the AI to do now is what Jimi Hendrix or Yngwie Malmsteen did, where Hendrix figured out for the first time that you could turn an amp way up and get a nonlinear feedback between the amp and the body of the electric guitar,

[00:14:47] so you're getting a resonant sound that you can then work with. And that's a practical creativity: breaking boundaries and fiddling around with stuff, using two devices in a different way than anyone thought to before, and then building on that, right?

[00:15:04] So you're not going to have an AI invent a new genre of music with the current algorithms. It's going to recognize patterns in what's being done and then improvise in that vein. But I don't think that's a fundamental limitation on the part of AI. I think it's a limitation of what has been focused on, because creativity is scary and dangerous to military organizations.

[00:15:36] It's not what Wall Street really wants; they're about risk minimization. And for advertising, the basic method is: take whatever people have clicked on before and give them something similar to that, so they can click on it again and again. You really need to open up the whole AI development world so that creative applications get more energy behind them.

[00:16:00] And that's one key aspect that ties in with algorithms. I know it's not generally recognized how much the bias in the AI community about what algorithms to work with is conditioned by the business models of the government and big tech organizations that are developing the algorithms.

[00:16:18] So I think we need different sorts of AI algorithms for creativity, and these different types of AI algorithms will both stimulate and be fostered by different sorts of business models as well. And scientific discovery is an important thing. We're talking about music and arts, which are also very important and interesting, but scientific discovery, and also education,

[00:16:45] these are also very important, right? To make an AI tutoring system that isn't just boring as hell for your kid to interact with, it needs to be imaginative and creative and playful, right? And to discover new methods of medical therapy, rather than just new drugs along currently pursued lines, you also need the creativity and imagination that a scientist has.

[00:17:14] So I think there are big niches like education or medical and scientific discovery that need more creativity, but those niches are not where the bulk of AI R&D money and brainpower are going right now.

[00:17:28] Trond Arne Undheim, Host: [00:17:28] Out of curiosity then, why are you, dare I say, optimistic that within three to ten years things are going to change? Is it because of work you are doing, that you are making that progress?

[00:17:39] Are you seeing some other independent groups in the AGI community making this progress, or is industry itself, naturally or perhaps because of COVID or some other cataclysm like social isolation, going to reorient itself?

[00:17:56] Ben Goertzel, CEO, SingularityNET: [00:17:56] COVID is having the opposite impact; it's increasing the hegemony of a few big tech companies all over the world, actually.

[00:18:02] Google, Amazon, Facebook and so on are doing so well because of COVID, because everyone's sitting at home, just buying stuff online and feeding their data into these networks. Yeah, of course I'm optimistic about what my own team is doing, but I think there's also a bunch of other highly interesting and valuable projects in the AGI research community which have not yet come into the limelight.

[00:18:30] What I think happened with deep neural nets in the last, say, five to seven years is that a whole bunch of stuff that had been around in the research literature forever suddenly took off. I was teaching deep neural networks in university in the early nineties, and they ran very slowly.

[00:18:50] The algorithms then were quite similar to what we have now, but you just couldn't train big models, and it took forever. So what you saw was advances in hardware led to advances in tooling, which led to tweaks of existing AI algorithms, which led to radically superior functionality.

[00:19:11] And we've seen that with deep neural nets. So I think we're going to see a similar explosion in the next, say, five years with neural-symbolic AI algorithms, where you're using logical reasoning together with neural nets, and with evolutionary AI, AI algorithms that simulate evolution by natural selection, which is the most creative algorithm we know; it created us.
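
A minimal sketch of the kind of evolutionary AI Ben mentions here, evolution by natural selection used as a search procedure. The genome encoding and fitness function below are toy choices for illustration; this is not OpenCog's evolutionary learning component.

```python
import random

# Toy genetic algorithm: evolve a bit string toward all ones.
GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Number of ones; stands in for any problem-specific score.
    return sum(genome)

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=100):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]  # keep the fittest half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

print("best fitness:", fitness(evolve()))
```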

[00:19:36] So I think some of these other historical AI algorithms, which have more potential for generalization, for creativity, for imagination, are going to explode just as we've seen with deep neural networks; it's just that they needed a bit more hardware, and a different sort of hardware, than neural networks did.

[00:19:59] And I think we're going to see, for example, graph chips come out. There's already Graphcore, and Graphcore is good, it does graph processing on the chip, but it's oriented towards floating-point-based graphs, as you see in graph neural nets. You're going to see graph chips that are optimized both for floating-point and for discrete, logic-based graphs.

[00:20:23] As we see these graph chips come out and get networked together in massive server farms, you're going to see an explosion in hybrid, cross-paradigm AI systems, which is going to lead to advances in general intelligence and computational creativity. And you're going to find a lot of the ideas that have been around a long time suddenly starting to do amazing things, just as convolutional neural nets,

[00:20:48] which were out there in the nineties, suddenly around 2014-15 started to do impressive things in computer vision. So I'm not optimistic about the industry structure as it is now, but I'm optimistic that hardware advances, which are quite steady, are going to facilitate increases in software functionality, which will then give an opening for the disruption of the current industry structure.

[00:21:17] Trond Arne Undheim, Host: [00:21:17] So Ben, we don't need to wait for quantum computing to take place here. This is much more incremental; you can gain a lot of headroom just with incremental changes.

[00:21:30] Ben Goertzel, CEO, SingularityNET: [00:21:30] Yeah, I think graphs on a chip, a new form of processor, are going to be huge in the same way that GPUs were huge.

[00:21:39] So if you look at our OpenCog AI system, which we're building a new version of called OpenCog Hyperon, and this is being done in a spinoff of our SingularityNET blockchain AI project called TrueAGI; if you look at that OpenCog/TrueAGI architecture, it's a knowledge graph.

[00:21:56] It's a labeled hypergraph, with nodes and links representing knowledge, and you have a bunch of different AI algorithms, neural nets, reasoning algorithms, evolutionary algorithms, all cooperating on the same knowledge hypergraph, right?
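
A rough sketch of what a shared, labeled knowledge hypergraph looks like as a data structure: typed nodes, plus links that can connect any number of atoms, which several different reasoning processes could read and write. The class and type names below are illustrative, not the actual OpenCog Hyperon API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Union

@dataclass
class Node:
    type: str                  # e.g. "ConceptNode"
    name: str                  # e.g. "cat"
    truth: float = 1.0         # simple stand-in for a truth value

@dataclass
class Link:
    type: str                  # e.g. "InheritanceLink"
    targets: Tuple[int, ...]   # ids of the atoms this link connects
    truth: float = 1.0

@dataclass
class AtomSpace:
    atoms: Dict[int, Union[Node, Link]] = field(default_factory=dict)
    next_id: int = 0

    def add(self, atom) -> int:
        atom_id = self.next_id
        self.atoms[atom_id] = atom
        self.next_id += 1
        return atom_id

    def links_of_type(self, link_type: str) -> List[Link]:
        return [a for a in self.atoms.values()
                if isinstance(a, Link) and a.type == link_type]

# Two nodes and one hyperedge relating them; a reasoner, a neural net,
# and an evolutionary learner could all operate over the same store.
space = AtomSpace()
cat = space.add(Node("ConceptNode", "cat"))
animal = space.add(Node("ConceptNode", "animal"))
space.add(Link("InheritanceLink", (cat, animal), truth=0.95))
print(len(space.links_of_type("InheritanceLink")))  # -> 1
```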

[00:22:16] Once you get a really efficient graph chip, and you have good interconnects among multiple graph chips in a server farm, that will get you a long way. Now, quantum computing is going to get you even further; I'm not a skeptic about that whatsoever. I spent some time working out how you would improve OpenCog's logical reasoning algorithms to reason about quantum amplitudes rather than probabilities.

[00:22:33] I have no doubt that once we get flexible quantum Turing machines, we're going to be amping up AGI to a quite different order. Now, it may be that human-level AGIs that we build on classical computers are the ones building the quantum-based AGIs. I suspect that,

[00:22:54] both because I'm an optimist that we can get to AGI probably faster than we're going to get large-scale quantum Turing machines, and also because figuring out quantum algorithms is insanely hard for the human brain. It's a really fun thing to think about, but it really screws with human intuition.

[00:23:13] Trond Arne Undheim, Host: [00:23:13] I love that you, of all people, say that, because a lot of people who have listened to you or read your work over the years, I'm sure they thought you were the crazy one, both advanced in your thinking and optimistic on behalf of a future that would be so different.

[00:23:29] So now that a lot of people in industry have actually gotten on board this quantum train, it's just ironic, I find, to see where you land.

[00:23:38] Ben Goertzel, CEO, SingularityNET: [00:23:38] Honestly, if I weren't working on building AGI, I would be working on porting AI algorithms to quantum computing architectures. I think it's very exciting.

[00:23:51] I think the human brain probably does make some use of freaky macroscopic quantum resonances and so on, but I think by and large it's not a quantum computer in the sense of, say, Shor's algorithm or the classic quantum computing algorithms. I think the human brain is mostly doing classical computing.

[00:24:13] And as far as I can tell now, I could be wrong, but I think you can get beyond human-level AGI using classical computing, and then quantum computing will take you even to the next level. And if that's wrong, in the grand scheme of humanity it doesn't matter.

[00:24:36] It just means you have to wait another 20 years to get to the singularity. But if you look at what we see in the human brain, in the visual cortex, auditory cortex, hippocampus, all the parts that we've managed to understand at a moderate level of detail,

[00:24:53] there is no hardcore quantum computing there. It's neurons and glia; it's classical diffusion of charge through the extracellular matrix. It's not like uncollapsed quantum systems, right? If that were how the brain worked, that would be amazing, but we don't really have evidence of that at the moment.

[00:25:14] But the quantum computing industry is doing exactly the right thing; they're building better and better quantum computers. That's going to take some time, but we're getting more qubits each year, which is incredible.

[00:25:27]Trond Arne Undheim, Host: [00:25:27] I want to talk more about the things that you are excited about at the moment and also working on yourself.

[00:25:32] But before that, maybe more of a curiosity on my end: who do you talk to and find fascinating in your own field? There's an adage that says it's lonely at the top, when you have worked on this for 30, 40 years and you start discussing AI or AGI with the random PhD student here and there.

[00:25:53] I imagine that you feel like a grandfather that has to explain everything, and that might be interesting, but just bring me in on your own brain trust. Where do you actually gain

[00:26:03] Ben Goertzel, CEO, SingularityNET: [00:26:03] inspiration? I've been very fortunate that I actually managed to hire, within SingularityNET, my favorite AGI researcher, a guy named Alexey Potapov out of St.

[00:26:15] Petersburg. He's not as old as me, but he has a 15-year track record publishing papers on AGI as well as computer vision, and he had a book on AGI in Russian. So I've been digging in very deeply with Alexey for the last couple of years on designing a new version of OpenCog, which is the OpenCog Hyperon system.

[00:26:38] So that's been quite rewarding. Because one issue in AGI is that it's attracted a lot of iconoclasts. I've often said there are more AGI theories than there are AGI researchers, so it's very hard to get multiple AGI zealots

[00:26:55] aligned together. So I'm really happy that within SingularityNET there's myself, there's Alexey, and we have Nil Geisweiller, another AI PhD, a French guy working out of Bulgaria. I've been working with him for 10 years, and he has his own passions, his own ideas about AGI, but we've managed to convince him to collaborate on the group project. But beyond my own team, I think that actually the

[00:27:18] AGI conference series that I organize each year, starting in 2006, has a really nice community around it. So you look at, say, Kristinn Thórisson from Reykjavik University. He has some amazing AGI prototypes and thinking about self-replicating code; he has a code framework called Replicode.

[00:27:37] It's a bunch of little codelets which exist to rewrite other codelets, in a whole network of self-modifying code fragments, and they actually got that to do things like control humanoid robots that talk to you. It's quite amazing. Then Arthur Franz out of Ukraine is working on AGI systems that try to

[00:27:57] approximate Marcus Hutter's AIXI system. AIXI is an AGI system that Hutter proved would work extremely well for general intelligence if you had infinitely much computing power; Arthur Franz is trying to scale it down. Then Pei Wang, a good friend of mine, who worked for me in the late nineties,

[00:28:20] is now doing his own project called NARS, the Non-Axiomatic Reasoning System. We disagree on a lot of stuff, because I like probability theory and Pei has his own sort of non-probabilistic scheme for uncertainty management, but he's been very creative in developing OpenNARS, his open-source proto-AGI system.

[00:28:37] When we went to try to implement temporal reasoning in OpenCog using our own probabilistic logic, we borrowed an awful lot of ideas from OpenNARS and how they deal with temporal reasoning. So I could go on with a long list; there's actually a whole community of people brewing their own, what are now out of the mainstream,

[00:29:02] proto-AGI projects. And by and large it's a whole separate community from the deep neural net mainstream, which is making more money and getting more notoriety in the press now.

[00:29:15] Trond Arne Undheim, Host: [00:29:15] But is manpower helpful here? Like, if Facebook and everybody else were to throw a billion dollars at it and say, a thousand of our guys are going to start working on this?

[00:29:24] Ben Goertzel, CEO, SingularityNET: [00:29:24] True,

[00:29:25] then we would get AGI much faster. At this stage, manpower and compute power would be very helpful. What happens instead is that the promising PhD students are sucked into these big tech companies because they're exciting and fun and pay a lot of money, and then people are turning away from working on innovative AGI algorithms to work on what the big tech companies want them to work on.

[00:29:50] Trond Arne Undheim, Host: [00:29:50] And that's a bit of a controversy here, and you're not afraid, you don't shy away from that controversy, I must say. The debacle around Sophia: you created this robot, and Sophia got Saudi citizenship in 2017, and then big tech talks back to you and says, and I think it was Yann

[00:30:06] LeCun, who was in the media saying, hey, this is a charade because it's Wizard of Oz-type AI. What do you say to people who say you're combining very complicated things with somewhat gimmicky ways to illustrate a future?

[00:30:20] Ben Goertzel, CEO, SingularityNET: [00:30:20] I thought that, with Yann LeCun,

[00:30:24] he was almost, if I was going to have Sophia criticized by someone, the best one to do it, because he's in a terrible position to do that. He's building the AI that Facebook uses to place, like, Russian disinformation online to make you vote for Trump. And to have someone using AI for probably evil purposes

[00:30:46] criticize just a start-up making an entertaining robot, it seemed ironic to me, actually. But if it had come from Yoshua Bengio, I would have been more hurt, because Bengio is also a deep learning guru, but he's truly working on high-integrity, highly beneficial applications of AI. But that aside:

[00:31:06] Yeah, Hanson Robotics, that's their own company, and it's run by David Hanson, who's a close friend of mine, but he's not a clone of me. So certainly David has at times presented things in ways that were different from how I would have presented them. He has more of a

[00:31:25] theatrical flair and a theatrical background; he comes from a

[00:31:29] Trond Arne Undheim, Host: [00:31:29] Hollywood film background, but let's

[00:31:30] Ben Goertzel, CEO, SingularityNET: [00:31:30] talk about that. I think with Sophia, it's not a Wizard of Oz thing. Sophia is not puppeteered by people most of the time; now and then she has been. And she's also not an AGI. Unfortunately, the real story is sufficiently complicated as

[00:31:49] to bore most people, and I'm not sure Yann LeCun has ever bothered to take 10 minutes to read what the real story is. Because the bottom line is, when Sophia is giving a speech, often someone just typed in that speech and she's mouthing it. When she's having a conversation, there's not someone behind the scenes controlling the conversation.

[00:32:09] There's a dialogue system there. That dialogue system is a mix of hand-coded rules and neural networks, and it usually doesn't understand what it's talking about, but sometimes it does. So it's just a bit complicated. In the end, except for the beautiful embodiment and the facial interaction and emotional engagement, what's happening on the back end of Sophia is not that qualitatively different from what's happening on the back end of, say, Google Assistant or

[00:32:44] Siri or Alexa, which also have some rules and some neural nets, and sometimes they know what they're talking about and sometimes they don't. And I guess there can be a critique that Sophia pulls more people into thinking she's intelligent than, like, a Google Home or an Alexa does, but yeah.


[00:33:04] Trond Arne Undheim, Host: [00:33:04] I wasn't so interested in the critique as much as in starting a discussion, and pardon me if I framed it in a way that you have to defend it. I'm actually just more intrigued, personally, by the idea that when you personify a robot into a humanoid form, and especially when you use a fembot, or I believe they also call it a, what do they call it,

[00:33:25] a gynoid. So when you put a robot into female form, something happens to the public imagination. First of all, it goes ballistic. Hollywood goes,

[00:33:34] Ben Goertzel, CEO, SingularityNET: [00:33:34] The female form, of course, that has an aspect to it. I remember when we were working with the Philip K. Dick robot. He's a guy, an old science fiction writer,

[00:33:45] and the robot was beautifully built by David Hanson, in 2016 I think it was, or maybe '17, in Austin, Texas. And we were filming some people interacting with the Philip K. Dick robot. I was there talking to the Philip K. Dick robot and being filmed, and I knew one of my close friends was in the other room helping control the robot.

[00:34:13] It was running a chatbot, a dialogue system, but it was being overridden by a human when it said something bad, so the human could correct what the dialogue system said. So even though one of my old friends was helping control the robot, and I knew the software inside the robot, I could not escape the feeling of, oh my God, I'm talking to the soul of Philip K. Dick.

[00:34:37] This is incredible. And then you start thinking, okay, I know my friend is in the other room, but maybe the soul of Philip K. Dick is controlling Stephan's brain to help him type in what the robot says. It works really well.

[00:34:52] Trond Arne Undheim, Host: [00:34:52] So you understand what I'm talking about, that Hollywood has exploited this, obviously, whether it is in Blade Runner or even in recent movies, right, I guess Ava in Ex Machina or something.

[00:35:04] Ben Goertzel, CEO, SingularityNET: [00:35:04] That's an outright copy of Sophia.

[00:35:06] Her original name was Eva; then we had to change it to Sophia after they came out with Ava.

[00:35:12] Trond Arne Undheim, Host: [00:35:12] Oh, that's interesting. So fiction and reality go hand in hand.

[00:35:18] Ben Goertzel, CEO, SingularityNET: [00:35:18] We're trying to leverage this now in our project called Awakening Health, where we have a robot called Grace, which is like Sophia's little sister, and we're rolling out Grace to eldercare facilities: senior living centers, nursing homes, and so forth.

[00:35:36] So there you're making a robot whose goal is to provide social and emotional support, as well as some forms of practical assistance, to elderly folks in care facilities. And there, the fact that the robot can establish an emotional bond is good; that's what you want.

[00:35:54] These people are being neglected; they don't have enough positive emotional interaction. I think if you use this to help support the elderly, it can actually prolong their life and boost their health. It's a hundred percent a good thing. If you're using it to help sell people junk they don't need, it's less of a beneficial thing.

[00:36:14] I think that comes down to the boring conclusion that, as with every powerful technology, you can use it in good or bad ways. That holds for AI, but it also holds for the sort of interaction-experience magic that you get with these humanoid robots.

[00:36:31] Trond Arne Undheim, Host: [00:36:31] I wanted to take it to another topic, which I would love for you to find a way to illustrate through something as engaging as a robot form factor.

[00:36:40] I know you are really intrigued by and involved in interoperability when it comes to AI and AGI, and I have worked on interoperability myself. One of the tricky things with it is that even the concept is a tongue twister. How do you explain and get enthusiasm around this idea that we have to work together and our systems need to interplay?

[00:37:05] Otherwise we are not going to make this progress that, at least, you want, and some of us want and others fear. Interoperable AGI: what's the big deal, why has it not been happening, and why do you see an

[00:37:18] Ben Goertzel, CEO, SingularityNET: [00:37:18] opening? This is really interesting, and this is something that I'm currently actively working on within SingularityNET, actually.

[00:37:27] So I've talked a lot with Charles Hoskinson, who runs the Cardano blockchain project, about this. Because we're porting our SingularityNET blockchain-based AI platform, which is a decentralized, democratically controlled infrastructure that allows a bunch of AIs to run on it and cooperate together to solve problems.

[00:37:48] We're in the midst of porting most of that network from the Ethereum blockchain to the Cardano blockchain. Now, for many things that doesn't matter; the OpenCog AGI engine running behind the Grace nursing robot and so on is going to be the same AI algorithm whether you're running it on SingularityNET or on a centralized platform, and it's going to be the same AI algorithm whether you run it on

[00:38:11] Ethereum or Cardano. But there are some interesting advantages that come from a decentralized platform, which is that you can pull in third-party AIs written by other people, and they can help on the back end. So say you're using an OpenCog system built by TrueAGI or Singularity Studio or whatever to control these Awakening Health robots.

[00:38:34] If this OpenCog system is deployed on SingularityNET, then let's say someone asks the robot a question one day about gardening or something. If you have a gardening AI bot, which knows a lot about gardening, that's on SingularityNET, it can then help answer that question,

[00:38:56] and we didn't have to train a neural model to be knowledgeable on that particular topic. Or, a more important example: let's say someone is speaking with slurred speech because they have Parkinson's or something. Maybe the standard speech-to-text that we're using doesn't work well for slurred speech.

[00:39:13] If someone else has trained a model on slurred speech and they put it on the SingularityNET platform, then the OpenCog system controlling the robot can reference that slurred-speech-interpreting AI. But to make that work, you need the different AIs in this decentralized network to have a fairly sophisticated description language to describe to each other what they can do, and how, and with what constraints.

[00:39:40] On the Cardano platform, because of their infrastructure, their smart contracts are in the Haskell language, which is a functional programming language, and we're finding it easier to implement an AI description language in that context, using some funky computer science from dependent type theory and so forth.

[00:40:00] So what we're doing there is using the Idris programming language, which is a dependently typed language, integrated with Cardano's Plutus, where the smart contracts are implemented in Haskell. We're using this to make a cool scheme where an AI describes what it does, what inputs it takes and what outputs it produces, but also what it charges to solve which kinds of problems and how much compute resources it needs.

[00:40:27] Also, according to what standards it is fair and unbiased in its processing, some things about concurrent processing, and what infrastructures it can run on. The AI describes all these aspects of its processing to other AIs using this AI description language. So we're creating a description language for AIs to describe all their properties, what they can do, to other AIs.
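
To make the idea concrete, here is a rough sketch of the sort of machine-readable self-description an AI service might publish so that other AIs can decide whether to call it. The real work described above uses dependent types in Idris integrated with Cardano's Plutus; this plain-Python dataclass is only an illustration, and every field name in it is hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceDescriptor:
    name: str                      # e.g. "slurred-speech-to-text"
    input_types: List[str]         # e.g. ["audio/wav;16kHz"]
    output_types: List[str]        # e.g. ["text/plain"]
    price_per_call: float          # in whatever token the network uses
    max_latency_ms: int            # a resource/performance constraint
    fairness_standards: List[str]  # standards the service claims to meet
    runs_on: List[str]             # e.g. ["cpu", "gpu"]

def can_handle(d: ServiceDescriptor, needed_input: str,
               needed_output: str, budget: float) -> bool:
    """Trivial matchmaking: does this service fit the request?"""
    return (needed_input in d.input_types
            and needed_output in d.output_types
            and d.price_per_call <= budget)

slurred_stt = ServiceDescriptor(
    name="slurred-speech-to-text",
    input_types=["audio/wav;16kHz"],
    output_types=["text/plain"],
    price_per_call=0.02,
    max_latency_ms=800,
    fairness_standards=["example-bias-audit-v1"],
    runs_on=["gpu"],
)
print(can_handle(slurred_stt, "audio/wav;16kHz", "text/plain", budget=0.05))
```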

[00:40:49] And then you need this for data as well. If the AI is looking at a data set, you need a description of what's in the data set, and some standard data ontology, so the AI can decide if it wants to pay for that data set and whether it's what it needs. So we're developing this now for use just within the SingularityNET platform, but we're also talking to folks in various standards groups within IEEE about,

[00:41:14] once we've rolled this out within SingularityNET, what would be the path to try to get this adopted as a broader standard beyond SingularityNET. And I think that's critically needed if we're looking at building societies of AI minds, societies of AI algorithms that are coming together to have an emergent intelligence, where the intelligence of the whole exceeds the sum of the intelligences of the parts, right?

[00:41:39] Because one route to AGI, or to highly functional narrow AI systems, is where you just create a monolithic system that one party built themselves. Another route is that you have multiple AIs written by different people, and they're communicating and cooperating together, and the intelligence comes out emergently.

[00:41:59] It may be a mix of both, right? You may have one system that contributes the most abstract cognitive, cortex-like part, and then other AIs written by other people for more peripheral, sensory-motor or specialized-knowledge parts. But if you have this collective, combinational society-of-minds aspect to

[00:42:21] the AI that you're developing, you need a quite sophisticated and abstract standard for AIs to describe what they're doing to each other so they can interoperate. And this is not what big tech is pushing toward, because they're building their own sort of model.

[00:42:37] Trond Arne Undheim, Host: [00:42:37] Sure. They're building their own islands and their own stacks obviously.

[00:42:41] But what are some of the applications? Health would seem to be one of the more promising ones for advanced AI. I think one of the reasons why this discussion becomes a little opaque for a lot of people is that the moment you say AGI, or general intelligence using computers, people immediately jump a hundred years into the future and start talking about computers taking over. But you are talking about more near-term

[00:43:07] applications that are really helpful for people, like slurred speech, fixing very near-term, real-life issues, and the fact that we haven't really been able to apply compute to them. Now, what are some of the other applications that you see, say in a three-to-five or five-to-seven-year timeframe?

[00:43:25] Ben Goertzel, CEO, SingularityNET: [00:43:25] So we're now in the midst of what I call the narrow AI revolution, where AI is being applied in all sorts of vertical markets, but in a highly specialized way. The next step is what I call the AGI revolution, where we're getting AI systems that can learn and generalize and imagine and transfer knowledge.

[00:43:46] And then beyond that will be superintelligence. I think the path from here to AGI is going to be through what I think of as narrow AGIs, where you're getting more and more general intelligence in the system, but focused in a particular application area. So you have a system that's displaying more and more general intelligence in coordinating a smart city, or in, say, operating an investment bank, or in delivering healthcare,

[00:44:12] or in providing education. So in healthcare, you can see there are a lot of different aspects that could be addressed by narrow AIs separately, or they could be combined together into more of a narrow AGI system for healthcare. We have these Awakening Health robots that I've talked about, which are doing eldercare, but we've also been applying machine learning for precision medicine to help decide, say, if you have COVID-19 or you have cancer or something, can we look at gene expression data and your

[00:44:43] genome and other blood work and lifestyle data, and can we use all this data to help figure out which therapy will be best for you? And through the Rejuve spinoff of SingularityNET, we have apps that we're going to launch soon that can do pre-symptomatic identification of infection, for instance from an Apple Watch, because of the pulse oximeter and HRV, and from a digital thermometer.
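
A toy illustration of pre-symptomatic screening from wearable signals: flag a day whose resting heart rate, HRV, SpO2, or temperature deviates strongly from that person's own baseline. The features, thresholds, and numbers are invented for illustration; a real system would rely on validated models and clinical evaluation, not simple z-scores.

```python
from statistics import mean, stdev

def zscore(value, history):
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else (value - mu) / sigma

def flag_anomaly(today, baseline, threshold=2.5):
    """baseline: dict mapping signal name -> list of past readings."""
    scores = {k: abs(zscore(v, baseline[k])) for k, v in today.items()}
    return any(s > threshold for s in scores.values()), scores

baseline = {
    "resting_hr": [58, 60, 59, 61, 57, 60, 59],
    "hrv_ms":     [72, 70, 75, 68, 71, 74, 73],
    "spo2":       [97, 98, 97, 98, 97, 97, 98],
    "temp_c":     [36.5, 36.6, 36.4, 36.5, 36.6, 36.5, 36.4],
}
today = {"resting_hr": 71, "hrv_ms": 52, "spo2": 94, "temp_c": 37.4}
flagged, scores = flag_anomaly(today, baseline)
print(flagged, scores)
```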

[00:45:04] Again, you have narrow AIs to analyze these bio-signals. As you move toward AGI, you're saying, can we have one AGI mind, like one OpenCog system embedded in SingularityNET, interoperating with other AI tools? Can we have one AI network that deals with medical robots, with precision medicine, with clinical trials, with bio-signals from health apps?

[00:45:25] And can this AGI get an overall model of human health in this way? Clearly, if you take, say, age-associated disease as an example, the robots in the eldercare facilities are learning something, and then AI analytics of clinical trials for, say, Alzheimer's therapies or cancer therapies are learning a different thing.

[00:45:50] And AI that's looking at data from apps and from people's Apple Watch data is learning a different thing. But all this should be contributing to an overall model of human health, human aging, human disease. And that's where the creativity comes from, right? That's where you get an AI that can do more than just discover a drug target, but could maybe discover a whole new way of addressing an age-associated disease, from putting all these different sorts of

[00:46:17] data sources and interaction modes together into a common AI network. What's really interesting there, though, is once you have the narrow AGI for medicine, let's say you also have an AI research project which is aimed at common-sense reasoning, say operating a robot toddler in a lab or something.

[00:46:37] These are both using an OpenCog system deployed in SingularityNET. Then can you connect the common-sense-reasoning AGI toddler with the medically oriented narrow AGI? And then the common sense from one infuses the other, the knowledge from one infuses the other, and they can all cross-connect.

[00:46:58] And that's how I really see us making the transition to AGI, and then to the singularity. There are going to be narrow AGIs moving towards AGI in particular verticals, but they can all cross-connect together, which is where the interoperability you mentioned pops up.

[00:47:20] Trond Arne Undheim, Host: [00:47:20] Now, before I let you go, I have a question and a statement. My statement is that you have the coolest hat available to mankind. How did you get the hat, and what's the deal with your hats?

[00:47:30] Ben Goertzel, CEO, SingularityNET: [00:47:30] The hat has its own secrets. I'm not allowed to disclose them; we'll find out only after this.

[00:47:37] Trond Arne Undheim, Host: [00:47:37] Okay, that sounds good. Thank you so much, Ben, for sharing with the listeners your take on futuristic AI. It's been a pleasure.

[00:47:45] Ben Goertzel, CEO, SingularityNET: [00:47:45] All right. Thanks a lot for a fascinating discussion.

[00:47:49] Trond Arne Undheim, Host: [00:47:49] You have just listened to episode 79 of the Futurized podcast with host Trond Arne Undheim, futurist and author.

[00:47:57] The topic was futuristic AI. Our guest was Ben Goertzel, CEO and founder of SingularityNET and chairman of the Artificial General Intelligence Society. In this conversation, we talked about futuristic AI and the futuristic applications of interoperable AI. We discussed Sophia the robot, the singularity, transhumanism, and how to define intelligence.

[00:48:26] We discussed decentralized, distributed, and interoperable AI, and the importance of trust to progress with technology. Finally, we covered the future of human-computer interaction. My takeaway is that futuristic AI will continue to fascinate, whether we ever get there or not. It is a Janus-faced future

[00:48:52] that the proponents of artificial general intelligence are exploring. Will it solve more problems than it creates? In reality, it's not a question of when we get there, unless we suddenly find ourselves needing that level of intelligence for an existential survival issue for our race, but how we get there. At some point we likely will, but whether it will take 50 or 150 years, I'm less sure about. Thanks for listening.

[00:49:20] If you liked the show, subscribe at Futurized.co or in your preferred podcast player, and rate us with five stars. If you like this topic, you may enjoy other episodes of Futurized, such as episode 30 on artificial general intelligence, episode 51 on AI for learning, episode 16 on perception AI, episode 49 on living the future of work, episode 35 on augmented reality, episode 47 on sci-fi tech, and episode 31

[00:49:52] on commoditized robotics. Futurized: preparing you to deal with disruption.

 


Ben Goertzel

CEO, SingularityNET

Dr. Ben Goertzel is the CEO of the decentralized AI network SingularityNET, a blockchain-based AI platform company, and the Chief Science Advisor of Hanson Robotics, where for several years he led the team developing the AI software for the Sophia robot. Dr. Goertzel also serves as Chairman of the Artificial General Intelligence Society, the OpenCog Foundation, the Decentralized AI Alliance, and the futurist nonprofit Humanity+. Dr. Goertzel is one of the world's foremost experts in Artificial General Intelligence, a subfield of AI oriented toward creating thinking machines with general cognitive capability at the human level and beyond. He also has decades of expertise applying AI to practical problems in areas ranging from natural language processing and data mining to robotics, video gaming, national security, and bioinformatics. He has published 20 scientific books and 140+ scientific research papers, and is the main architect and designer of the OpenCog system and the associated design for human-level general intelligence.