The problem with AI alignment is that humans aren't aligned
I'm sure there are some AI peeps here. Neural networks scale with size because the number of parameter settings that work for a given task grows exponentially (or even factorially, which is in fact a word) with network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, aren't aligned? What can we realistically hope for?
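For what it's worth, the factorial part of the claim has a concrete basis: permutation symmetry. Relabeling the hidden units of a layer (and permuting the adjacent weight matrices to match) leaves the network's function unchanged, so every working solution comes in at least n! functionally identical copies. A toy sketch (the network and names here are mine, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: y = W2 @ relu(W1 @ x)
n_in, n_hidden, n_out = 3, 4, 2
W1 = rng.normal(size=(n_hidden, n_in))
W2 = rng.normal(size=(n_out, n_hidden))
x = rng.normal(size=n_in)

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

y = forward(W1, W2, x)

# Permute the hidden units: reorder W1's rows and W2's columns the same way.
perm = rng.permutation(n_hidden)
y_perm = forward(W1[perm], W2[:, perm], x)

# All n_hidden! permutations yield networks with identical input-output
# behavior, so the count of equivalent working parameter settings grows
# at least factorially in the layer width.
assert np.allclose(y, y_perm)
```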
Here's what I mean by alignment:
The ability to specify a loss function that captures what humanity wants
Some strict or statistical guarantees on the deviation from that loss function, as well as on potentially unaccounted-for side effects
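On the second point, the closest thing current theory offers is a concentration bound: with enough held-out samples, the true expected loss can't deviate much from the measured one. A minimal sketch using Hoeffding's inequality (the function name and sample numbers are mine); note it only bounds deviation from the *specified* loss, not the harder gap between that loss and what humanity actually wants:

```python
import math

def hoeffding_bound(n, delta):
    """Deviation eps such that, for n i.i.d. losses bounded in [0, 1],
    P(true expected loss - empirical mean > eps) <= delta."""
    return math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# With 10,000 held-out samples and 95% confidence, the true loss
# exceeds the measured average by at most about 0.012.
eps = hoeffding_bound(10_000, 0.05)
```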
Pal, I want some of whatever you smoked prior to writing this.
Now seriously, from the way you wrote the post, I suspect you haven't had hands-on experience with deep learning and may well have just watched a handful of YouTube videos instead.