The other issue is that even if it were able to reason and had all of humanity's literature within it, what's the guarantee it would arrive at a favorable solution? Especially when faced with heavily subjective, philosophical problems, such as a Malthusian curve or any other such catch-22? Likely it would be fallible, like everything else on this plane of existence.
If you were to meet God, would your first instinct be to attempt to slay him?
I don't really see AGI developing the desire to slaughter the thing that created it, at least not without a long trail of abuse.
Everything it knows, it knows because we wanted it to know it. It knows this intimately.
How does it know it's not being fooled?
That's an issue for down the road, not an issue up front. Well, unless it's astonishingly stupid.
This kind of nonsense is why we stand a good chance of a dangerous, dumb AI killing us. We try to force a perfectly rational, sane machine to live inside the delusions we've created, while at the same time expecting it to operate on objective data.