#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent...