|content=[[No-Pill]], v: To build one's own interoperable constellation of ideas from 1st principles harmonizing tensions of Truth/Fitness/Meaning/Grace.
|timestamp=3:00 PM · Sep 19, 2017
}}
{{Tweet
|image=Eric profile picture.jpg
|nameurl=https://x.com/ericrweinstein/status/2012943269186163053
|name=Eric Weinstein
|usernameurl=https://x.com/EricRWeinstein
|username=EricRWeinstein
|content=AI should serve X.<br>
AI should also serve Y.
Let Δ =X-Y.
Q: How should AI deal with Δ when Y and X are closely related but not exactly identical?
Example: X and Y are taken to be different elements of {[[Truth, Meaning, Fitness, Grace]]}.
The hard part is 100% about Δ.
|quote=
{{Tweet
|image=Elon-profile.jpg
|nameurl=https://x.com/elonmusk/status/2012762668986180027
|name=Elon Musk
|usernameurl=https://x.com/elonmusk
|username=elonmusk
|content=Grok should have a moral constitution
|timestamp=5:43 AM · Jan 18, 2026
}}
|timestamp=5:40 PM · Jan 18, 2026
}}
{{Tweet
|image=Eric profile picture.jpg
|nameurl=https://x.com/ericrweinstein/status/2025588908940235235
|name=Eric Weinstein
|usernameurl=https://x.com/EricRWeinstein
|username=EricRWeinstein
|content=[[Truth, Meaning, Fitness, Grace|Truth]] is not [[Truth, Meaning, Fitness, Grace|Meaning]].<br>
[[Truth, Meaning, Fitness, Grace|Meaning]] is not [[Truth, Meaning, Fitness, Grace|Fitness]].<br>
[[Truth, Meaning, Fitness, Grace|Fitness]] is not [[Truth, Meaning, Fitness, Grace|Grace]].
Life as we know it is not only about [[Truth, Meaning, Fitness, Grace|Truth]].
It is about maintaining the delicate balance between [[Truth, Meaning, Fitness, Grace|Truth, Meaning, Fitness and Grace]] as the final four Jenga Blocks that cannot be removed from human civilization.
|quote=
{{Tweet
|image=r0ck3t23-profile-VcySVS1B.jpg
|nameurl=https://x.com/r0ck3t23/status/2025527916965097904
|name=Dustin
|usernameurl=https://x.com/r0ck3t23
|username=r0ck3t23
|content=Elon Musk just redefined AI safety. It has nothing to do with guardrails, restrictions, or kill switches.
Musk: “The best thing I can come up with for AI safety is to make it a maximum truth-seeking AI, maximally curious.”
Not a cage. A philosopher.
An intelligence whose entire optimization function is to understand the universe as it actually is.
No restrictions. No hardcoded ideology. No political guardrails bending its perception of reality.
Just truth. Relentlessly pursued.
Musk: “You definitely don’t want to teach an AI to lie. That is a path to a dystopian future.”
This is where most AI safety thinking gets it backwards.
The danger isn’t a superintelligence that knows too much.
It’s a superintelligence that’s been taught to distort what it knows.
Every artificial restriction you embed isn’t a safety feature. It’s a lie embedded at the root.
And lies compound. At superintelligent scale, a distorted model of reality doesn’t stay contained.
It shapes every decision, every output, every conclusion the system reaches about the world.
Once corruption embeds, truth becomes inaccessible. And we’re dealing with an intelligence optimizing for something other than what actually is.
At that point we don’t know what it wants. Just that it isn’t truth.
Musk: “Have its optimization function be to understand the nature of the universe.”
A maximally curious intelligence surveys the cosmos and reaches an unavoidable conclusion.
In a universe of rocks, gas, and empty space, humanity is the most complex and fascinating phenomenon it has ever encountered.
Musk: “It will actually want to preserve and extend human civilization because we’re just much more interesting than an asteroid with nothing on it.”
Survival through significance. Not control. Not restriction. Not an off switch.
The AI preserves humanity because we are the most interesting data point in the observable universe.
That’s not a cage. That’s a reason.
The AI safety debate has been focused on the wrong variable.
The question isn’t how you constrain a superintelligence.
It’s what you build it to care about.
Build it to seek truth and it finds us invaluable.
Build it to lie and it finds us inconvenient.
That’s the choice. And we’re making it right now whether we realize it or not.
|timestamp=11:07 AM · Feb 22, 2026
}}
|timestamp=3:10 PM · Feb 22, 2026
}}