June 2, 2024

#431 – Roman Yampolskiy: Dangers of Superintelligent AI


Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
Yahoo Finance: https://yahoofinance.com
MasterClass: https://masterclass.com/lexpod to get 15% off
NetSuite: http://netsuite.com/lex to get a free product tour
LMNT: https://drinkLMNT.com/lex to get a free sample pack
Eight Sleep: https://eightsleep.com/lex to get $350 off

Transcript: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman’s X: https://twitter.com/romanyam
Roman’s Website: http://cecs.louisville.edu/ry
Roman’s AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click a timestamp to jump to that time.
(00:00) – Introduction
(09:12) – Existential risk of AGI
(15:25) – Ikigai risk
(23:37) – Suffering risk
(27:12) – Timeline to AGI
(31:44) – AGI Turing test
(37:06) – Yann LeCun and open source AI
(49:58) – AI control
(52:26) – Social engineering
(54:59) – Fearmongering
(1:04:49) – AI deception
(1:11:23) – Verification
(1:18:22) – Self-improving AI
(1:30:34) – Pausing AI development
(1:36:51) – AI Safety
(1:46:35) – Current AI
(1:51:58) – Simulation
(1:59:16) – Aliens
(2:00:50) – Human mind
(2:07:10) – Neuralink
(2:16:15) – Hope for the future
(2:20:11) – Meaning of life