Dear Artificial Intelligence Experts,
I have questions! Excuse the length to come – and this preamble – but I’m trying to pin down my current understanding, without straw-man or steel-man bias, so an expert eye can judge whether my questions are sound or built on erroneous foundations.
Any insight will be appreciated. Needless to say, I’ve not found satisfactory answers in any of the current AI narratives, nor in publicly accessible material (to the best of my knowledge).
Luddites say (paraphrasing): “General intelligence is what it is, but artificial intelligence built by humans isn’t generalised by any current standard (though it may look like it). Engineered practicalities aside, we’re as far from creating abstract general intelligence today as we were 1000 years ago.” Are they wrong?
The atomic bomb is often cited as a cautionary example of the dreadful power of unfettered technology, but let’s not forget it’s only an extension of existing power, driven solely by human agency. Human beings dropped the atomic bombs – making their best choice at the time after weighing up partisan pros and cons – and certainly we command a lot more firepower nowadays, but ultimately it’s humans doing human things to other humans, albeit on a bigger scale.
Now, we can imagine an AI put in charge of the world’s nukes by some committee wanting to keep the risk of global destruction away from flawed human beings. Movies have been made about this. Hollywood presents it as a horror scenario, but for an AI expert, wouldn’t the reality of engineering a nuclear-authority AI also contain the ingredients of its own solution?
What I mean is: making the AI in the real world requires such specific engineering of computer code and codified reality analogues – breaking everything down – that there’s no way to build a fit-for-purpose, functional AI without it having incredible discernment fidelity.
It’s a little like building an enormously complex structure: it can’t be done without knowing how the thing is made and applying the detail of that knowledge. So the end result must end up being an exhibition of objective super-discernment. We can extend this reasoning to fighting an AI.
To build the AI needs a reality analogue defined in such discerning detail (to function at all) that its responses will by default include super-detailed discernment. Thus, unless it’s specifically created for genocide, the very applied skills of the AI creators used to make the thing preclude the possibility of accidentally making a computer dictator. Is that wrong?
AI research works out the process-to-programming needed to run an engineered system so it carries out its designated function, whatever that happens to be. The created system gathers data, the AI parses that data through coded reality-analogues, driving the system’s response per its physical capabilities. The devil is in the detail from start to finish. The practical distillation of reality, plus new data collected via input mechanisms, parsed into meaningful response output: it’s fucking complicated, and it gets exponentially more complex the more “reality” and “input” and “output” are part of the mix. Is this even conceivable after a while? Does creating this sort of gestalt quickly become impossible, i.e. a few layers into codifying reality – but nowhere close to human intuition – we’re beyond what’s feasible this side of the end of the universe?!
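The combinatorial worry here can be made concrete with a toy sketch (my own illustration, not a model of any real AI system): if a codified “reality” has some number of variables, each able to take some number of distinct values, the count of world-states the system must handle grows multiplicatively – which is why adding “just a few more” layers of reality can blow past what is feasible.

```python
# Toy illustration of combinatorial explosion in a codified world model.
# Hypothetical numbers chosen only to show the growth pattern.

def state_space_size(num_variables: int, values_per_variable: int) -> int:
    """Distinct configurations of a world model where each of
    `num_variables` variables can take `values_per_variable` values."""
    return values_per_variable ** num_variables

# A crude model: each variable has 10 possible values.
for n in (3, 6, 12, 24):
    print(f"{n:>2} variables -> {state_space_size(n, 10):,} states")
```

Doubling the number of variables squares the state count, so the jump from 12 to 24 variables takes the model from a trillion states to a trillion trillion – the scaling problem, not the per-state detail, is what dominates.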
All this leads to the observation that biology has had to play itself out in the reality crucible for billions of years. The timescales are long enough to have driven the evolution of human beings, with brain and imagination at the current cutting edge. Imperfect, sure, warts and all, but still quite amazing.
Isn’t there a good case to be made for the future of artificial intelligence lying in these fields coming closer together, as both inevitably drive forward in pursuit of human self-improvement – as this or that relevant consensus determines an improvement to be? Could extant biological solutions – like cells – provide useful bridging components for some eventual AI/human evolution?
SIXTH AND FINAL QUESTION
All this said: how are we going to beat China to the next levels of human improvement, given they’re a vast centralised superstate with human resources they can deploy without the same respect for individual life we have in the West?
Thanks for your time, assuming this reaches human eyes and those human eyes are in the skull of a human being with time and inclination to read and reply!