https://youtu.be/iVkUCTEauSc
Our documentaries:
https://www.youtube.com/watch?v=7_j9KUNEvXY&t=
https://www.youtube.com/watch?v=BzAMjU14z4w&t=
https://www.youtube.com/watch?v=ntnynBDbmaQ&t=
The latest version of ChatGPT is... “overdue, overhyped, and underwhelming.”
That comes from a new New Yorker piece by Cal Newport, an MIT-trained computer science professor who’s established himself as the defining voice of digital minimalism, a philosophy for navigating a world in which technology easily turns from a tool into a tyrant. Cal has proven prescient on what we’re seeing with artificial intelligence: a burst of marvelous innovation followed by a sudden slowdown.
What’s going on? Has ChatGPT hit a wall? What’s the normie-accessible way to understand the limits of large language models? Are the job disruptions of AI overstated? And what does he mean that Silicon Valley has “gone crazy”?
0:00 -- Introduction
1:14 -- ChatGPT Diminishing Returns
3:19 -- AI Technical History
15:57 -- Language Models
21:41 -- Lack of Understanding
24:40 -- Timeline of AGI
33:34 -- Components of AGI
36:03 -- Future of AGI
40:09 -- AI in Education
43:39 -- Job Disruption
53:30 -- AI Realism
1:02:07 -- Outro
📝 Transcript
Top Comments
@andrewfriedrichs9340
My child just went from crawling to walking in 2 months. Two more months he will be flying. After that he will be Superman. Give me billions now so I can train him for good.
1048 likes
@Ymirheim
I just put some dough into the oven to bake and in just minutes it turned into a loaf so I'm gonna wait for a few hours until it is big enough to feed all of humanity for the rest of time.
137 likes
@waveofbubble2194
the most anxiety reducing podcast i have seen in the past year
17 likes
@PaulOrtiz
Look at how much technology (and power) is required to do the simplest tasks, whether it’s word guessing or a self driving car, or a robot walking up some steps. All the compute power needed, big heavy batteries, sensors, cooling, data centres.
Meanwhile I, a squishy human with two eyes and ears and a brain, can wake up, have a glass of juice and a banana and jump straight to work, or driving, or writing a piece of music.
I feel like the hardware is the limiting factor right now. Our brains aren’t like classical computers. And we process so many more pieces of information, connected to all of our senses. Every hair that senses a breeze, our inner ear, our sense of touch, our eyes making a thousand micro-movements every few seconds. I don’t know what number you could put on all of that raw data that your brain processes and uses to build a model of the world around you, but I feel like it’s a LOT. And we do it all without needing a nuclear power station or an entire town’s worth of drinking water for cooling.
So maybe the real advances will come with some new “medium” or domain on which to simulate the intelligence. Right now it feels like we’re tossing billions of dollars into a black hole trying to make GPUs work like the literal galaxy of interconnected neurons in our brain. And all without the rich, vast sensory input we have as living, breathing beings.
48 likes
@temprd
Correction: programmers are being disrupted… by the illusion of increased personal productivity.
68 likes
@DrGeorgeAntonios
Both interviewer and interviewee are excellent. Thank you.
25 likes
@motionthings
"LLMs are nothing more than autocomplete on steroids" - Linus Torvalds (the creator of Linux)
275 likes
@kimzussman4233
Dr Newport is excellent, as are his hints about an AI related stock market bubble...
32 likes
@zzip0
So basically all this craziness is based on empirical observations of past performance and an attempt to extrapolate. The empirical observations were used to build a very simple predictive model of future performance. They guessed the model was exponential, because it looked that way based on very few past data points. And they probably guessed wrong.
There is no fundamental theory, there is no brain involved. The assumption was just - bigger will be exponentially better.
They thought they had found a new Moore-like law, which is itself purely empirical.
I am seriously losing confidence in human intelligence.
134 likes
@LoudDesperation
So am I safe retraining as an electrician, and will I still have work for 10 more years?
42 likes
@martinjohnson5498
So, they were at the start of an ogive curve and assumed (with no evidence and against logic) that they were on a parabolic or even hyperbolic curve? And we are supposed to think they are geniuses?
14 likes
@billyb6001
I like how something went from amazing, world-changing, and groundbreaking to underwhelming in like a year.
10 likes
@unterdemasphalt
tldr: LLMs are good for certain specific cases and we have no path to actual AGI. Don't panic.
9 likes
@moffattF
Is it even possible to turn a Camry into a Ferrari when the only fuel available is contaminated?😅
8 likes
@kingofthebungle8612
This is a great interview. A lot of this is stuff I have intuited as a user since the beginning, but hearing the reasoning behind why it is that way makes it make sense. Also, the history was interesting, because I could never quite tell whether guys like Sam Altman actually believed this tech was that good or it was all for show. I suppose now that he must've believed it in the beginning, but since the curve broke he is probably just trying to salvage everything.