Artificial Intelligence
The other day I was pondering AI (yes, my mind wanders around all kinds of strange topics) and I decided to dive down the rabbit hole of the internet to see what was currently going on with the topic. I found all kinds of articles on when General AI would be realized, and most had predictions of between 5 and 30 years. Many of the articles also had predictions, ranging from cautionary to outright alarming, of what would happen after GAI was realized. I started thinking about the human mind and all its complexity, and kinda wondered whether all these predictions are not only a bit premature but also ridiculously dismissive of human intelligence at its core. I changed my search terms and found an article that explains my viewpoint pretty well. I know there are some pretty smart people in this forum and I'm curious what you folks think.
https://www.nature.com/articles/s41599-020-0494-4
Comments
On the other hand, I expect some scary things from Deep Learning applications that have nothing to do with GAI. Especially given their black-box nature, there is a lot of potential for accidents, as well as accidental and deliberate misuse by corporations and states. But in the end that is more a case of humans screwing up with their new toys, as has happened throughout history, than of us welcoming our new AI overlords.
Corporations will use it to predict behaviors and maximize the returns on their research investments. They're already using it to target advertising for the best bang for their buck.
None of that is really AI though. It's more predictive modeling using Big Data. Calling that AI is like calling a mathematical equation "intelligence". Human-like intelligence is far more complex than these so-called 'experts' give it credit for. Trying to take a human mind and break it down into mathematical formulas and probability models completely ignores how random human beings can be in their thinking and behaviors. That randomness is part of the equation and isn't really predictable.
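To put a finer point on the "mathematical equation" part: the workhorse behind a lot of ad targeting is often nothing fancier than a regression model scoring how likely a user is to click. Here's a minimal sketch in Python; the features, weights, and bid threshold are all made-up numbers for illustration, not anyone's actual system.

    import math

    # Hypothetical learned weights for a click-prediction model.
    # In production these would be fit from billions of logged impressions;
    # here they are invented numbers purely for illustration.
    WEIGHTS = {
        "age_25_34": 0.8,
        "visited_product_page": 1.6,
        "hour_is_evening": 0.3,
    }
    BIAS = -3.0

    def click_probability(user_features):
        """Logistic regression: a weighted sum squashed through a sigmoid.
        No understanding involved, just arithmetic on logged behavior."""
        score = BIAS + sum(WEIGHTS[f] for f in user_features if f in WEIGHTS)
        return 1.0 / (1.0 + math.exp(-score))

    # Score one hypothetical user and decide whether the ad is worth bidding on.
    user = ["age_25_34", "visited_product_page"]
    p = click_probability(user)
    print(f"predicted click probability: {p:.3f}")
    if p > 0.1:
        print("bid on this impression")

That's the whole trick: an equation plus a threshold. Calling it "intelligence" is exactly the stretch being complained about above.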
Hubert Dreyfus was highly ridiculed when he published "What Computers Can't Do", but lately the tide has turned and a lot of people in academia have ended up saying that he basically got it right: computers can't think, and it's possible that they never will be able to. Sometimes it seems like they do, for example when you play chess against an AI. But whether it's Deep Blue or AlphaZero, it's not really an AI in the strong sense of the term. AlphaZero isn't really thinking when it plays chess, it's just "running", in the same sense that any program on your computer is "running". Though of course we can ask what the meaning of the word "intelligence" is.
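For what it's worth, here's a toy illustration of the "just running" point: a minimax search that plays the game of Nim perfectly by mechanically enumerating positions. Deep Blue's alpha-beta search and AlphaZero's neural-net-guided tree search are vastly more sophisticated, but this captures the flavor of what happens under the hood: evaluation, not deliberation. (This is my own sketch, not code from either system.)

    from functools import lru_cache

    # Nim: players alternate taking 1-3 stones; whoever takes the last stone wins.

    @lru_cache(maxsize=None)
    def minimax(stones, my_turn):
        """+1 if the original mover can force a win from here, else -1.
        Pure mechanical enumeration of every reachable position."""
        if stones == 0:
            # The previous player took the last stone and won.
            return -1 if my_turn else 1
        results = [minimax(stones - take, not my_turn)
                   for take in (1, 2, 3) if take <= stones]
        return max(results) if my_turn else min(results)

    def best_move(stones):
        """Pick the move with the best minimax value. No insight, just search."""
        return max((take for take in (1, 2, 3) if take <= stones),
                   key=lambda take: minimax(stones - take, False))

    # Prints 1: taking one stone leaves 20, a multiple of 4, which is a
    # lost position for the opponent under perfect play.
    print(best_move(21))

It plays flawlessly, and there is plainly nobody home. Whether scaling that up ever crosses into "thinking" is exactly the Dreyfus question.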
https://www.express.co.uk/news/science/1499292/google-executive-artificial-intelligence-warning-ai-creating-god-skynet-mo-gawdat-1499292
I'm referring to a hypothetical AI that may or may not exist in the future. Not the standard data and advertising bots.
It's more of a joke than a prediction or research, but I think it at least makes more sense than the rise of another race made of silicon and metal.
Instead, what the AI came up with was simply to create a single, gargantuanly long limb so that when it fell over, it would reach the finish line. :P Technically it solved the problem, just not in a way that would be practically useful in real life.
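That's the textbook "specification gaming" failure: the optimizer satisfies the letter of the objective while missing its intent. Here's a deliberately crude Python sketch of how it happens; the fitness function and the random-search "evolution" are entirely hypothetical stand-ins for the actual experiment, just to show the mechanism.

    import random

    random.seed(0)

    def fitness(limb_length, gait_skill):
        """Hypothetical objective: horizontal distance of the body's furthest
        point from the start after a short simulated episode.
        Tall, spindly bodies are unstable: past a threshold they just tip
        over, but tipping over throws the limb's tip limb_length forward,
        and this objective happily rewards that."""
        if limb_length > 2.0:
            return limb_length        # fell over; the tip lands this far away
        return gait_skill * 1.5       # stayed upright and actually walked

    # Plain random search standing in for "evolution" over body plans.
    best_design, best_score = None, float("-inf")
    for _ in range(10_000):
        candidate = (random.uniform(0.1, 50.0),   # limb_length
                     random.uniform(0.0, 1.0))    # gait_skill
        score = fitness(*candidate)
        if score > best_score:
            best_design, best_score = candidate, score

    limb, gait = best_design
    # The winner is always an absurdly long limb that falls across the
    # finish line: objective met, intent ignored.
    print(f"winner: limb_length={limb:.1f}, gait_skill={gait:.2f}, "
          f"distance={best_score:.1f}")

The optimizer isn't being clever or lazy; it just has no idea that "walk" was the point. Which is arguably an argument for both sides of this thread: no understanding, yet genuinely surprising output.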