Marcus’s Substack
Archive
The Fatal Flaws of the Prevailing AI Safety Paradigm
“Interpretability”, “alignment”, and “superalignment” are buzzwords, not feasible safety goals for large language models
Dec 19 • Marcus Arvan
March 2023
No one knows how to reliably test for AI safety
Testing AI safety is not just "hard to do": it is currently infeasible
Mar 28, 2023 • Marcus Arvan
Big Tech’s Unethical AI Experiment on Humanity
By Marcus Arvan
Mar 28, 2023
February 2023
Are AI developers playing with fire?
Do developers have any idea what AI are really learning, or how to control them?
Feb 17, 2023 • Marcus Arvan
Coming soon
This is Marcus’s Substack.
Feb 17, 2023 • Marcus Arvan