
Artificial Intelligence

Balrog99Balrog99 Member Posts: 7,367
The other day I was pondering AI (yes, my mind wanders around all kinds of strange topics) and I decided to dive into the rabbit hole of the internet to see what was going on currently regarding the topic. I found all kinds of articles on when General AI would be realized, and most had predictions of between 5 and 30 years. Many of the articles also had cautionary to outright alarming predictions of what would happen after GAI was realized. I started thinking about the human mind and all its complexity, and kinda wondered if all these predictions aren't only a bit premature, but also ridiculously dismissive of human intelligence at its core. I changed my search algorithm and found an article that explains my viewpoint pretty well. I know there are some pretty smart people in this forum and I'm curious what you folks think.

https://www.nature.com/articles/s41599-020-0494-4

Comments

  • AmmarAmmar Member Posts: 1,297
    I mostly agree - it's hard to predict the future, of course, but at the moment I don't think there is even a clear, viable path to GAI. Recent advances in things like Deep Learning are impressive, but they don't feel like a step toward GAI.

    On the other hand, I expect some scary things from non-GAI Deep Learning applications. Especially given their black-box nature, there is a lot of potential for accidents, as well as accidental and deliberate misuse by corporations and states. But in the end that is more a case of humans screwing up with their new toys, as has happened throughout history, than of us welcoming our new AI overlords.
  • ThacoBellThacoBell Member Posts: 12,235
    Honestly, most of my reservations with AI have nothing to do with the AI itself, but what corporations will do with it.
  • Balrog99Balrog99 Member Posts: 7,367
    ThacoBell wrote: »
    Honestly, most of my reservations with AI have nothing to do with the AI itself, but what corporations will do with it.

    Corporations will use it to predict behaviors and maximize the returns on their research investments. They're already using it to target advertising for the best bang for their buck.

    None of that is really AI though. It's more predictive modeling using Big Data. Calling that AI is like calling a mathematical equation intelligent. Human-like intelligence is far more complex than these so-called 'experts' give it credit for. Trying to take a human mind and break it down into mathematical formulas and probability models completely ignores how random human beings can be in their thinking and behaviors. That randomness is part of the equation and isn't really predictable.
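
    To make the "predictive modeling" point concrete, here's a toy sketch of the kind of thing ad targeting does under the hood: a tiny logistic-regression model fit to made-up browsing data. Every number and feature name here is invented purely for illustration - real systems do essentially this, just at enormous scale.

    ```python
    # Toy sketch of "predictive modeling using Big Data": logistic regression
    # trained by gradient descent on made-up click data. All data and feature
    # names are hypothetical, for illustration only.
    import math

    # Each row: (hours_on_site, past_purchases) -> clicked the ad (1) or not (0)
    data = [((0.5, 0), 0), ((1.0, 1), 0), ((3.0, 2), 1),
            ((4.5, 3), 1), ((5.0, 5), 1), ((0.2, 0), 0)]

    def predict(w, b, x):
        """Probability of a click for feature vector x."""
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))

    w, b, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(2000):                 # plain stochastic gradient descent
        for x, y in data:
            err = predict(w, b, x) - y    # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err

    heavy_user = predict(w, b, (4.0, 3))  # profile resembling the "clicked" rows
    light_user = predict(w, b, (0.3, 0))  # profile resembling the "ignored" rows
    print(heavy_user, light_user)
    ```

    The model ends up assigning a high click probability to the heavy-user profile and a low one to the light-user profile - useful for targeting ads, but it's curve fitting, not thinking.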
  • m7600m7600 Member Posts: 318
    Balrog99 wrote: »
    The other day I was pondering AI (yes, my mind wanders around all kinds of strange topics) and I decided to dive into the rabbit hole of the internet to see what was going on currently regarding the topic. I found all kinds of articles on when General AI would be realized, and most had predictions of between 5 and 30 years. Many of the articles also had cautionary to outright alarming predictions of what would happen after GAI was realized. I started thinking about the human mind and all its complexity, and kinda wondered if all these predictions aren't only a bit premature, but also ridiculously dismissive of human intelligence at its core. I changed my search algorithm and found an article that explains my viewpoint pretty well. I know there are some pretty smart people in this forum and I'm curious what you folks think.

    https://www.nature.com/articles/s41599-020-0494-4

    Hubert Dreyfus was highly ridiculed when he published "What Computers Can't Do", but lately the tides have turned and a lot of people in academia have ended up saying that he basically got it right: computers can't think, and it's possible that they never will. Sometimes it seems like they do, for example when you play chess against an AI. But whether it's Deep Blue or AlphaZero, it's not really AI in the strong sense of the term. AlphaZero isn't really thinking when it plays chess; it's just "running", in the same sense that any program on your computer is "running". Though of course we can ask what the word "intelligence" even means.
  • Balrog99Balrog99 Member Posts: 7,367
    For reference, here is an example of the opposite view. It's pretty typical of the scary 'Skynet is coming soon!' viewpoint.

    https://www.express.co.uk/news/science/1499292/google-executive-artificial-intelligence-warning-ai-creating-god-skynet-mo-gawdat-1499292
  • ThacoBellThacoBell Member Posts: 12,235
    Balrog99 wrote: »
    ThacoBell wrote: »
    Honestly, most of my reservations with AI have nothing to do with the AI itself, but what corporations will do with it.

    Corporations will use it to predict behaviors and maximize the returns on their research investments. They're already using it to target advertising for the best bang for their buck.

    None of that is really AI though. It's more predictive modeling using Big Data. Calling that AI is like calling a mathematical equation intelligent. Human-like intelligence is far more complex than these so-called 'experts' give it credit for. Trying to take a human mind and break it down into mathematical formulas and probability models completely ignores how random human beings can be in their thinking and behaviors. That randomness is part of the equation and isn't really predictable.

    I'm referring to a hypothetical AI that may or may not exist in the future. Not the standard data and advertising bots.
  • wukewuke Member Posts: 113
    I remember reading an article saying that a more likely form of AI crisis is not some kind of Skynet reign, but curious cases caused by AIs not really "thinking". For example, an AI tasked with managing the mass production of something might determine that the best environment for the job is an oxygen-free atmosphere.

    It's more of a joke than a prediction or research, but I think it at least makes more sense than the rise of another race made of silicon and metal.
  • ZaxaresZaxares Member Posts: 1,325
    AI can sometimes also come up with solutions that, while novel, would not be practical or feasible in real life. One good example I can remember was a project where the programmers were trying to teach an AI to write/train its own movement patterns, with the end goal being that the AI would learn how to walk from a starting point to a destination.

    Instead, what the AI came up with was to simply grow a single, gargantuanly long limb so that when it fell over it would reach the finish line. :P Technically it solved the problem, but not in a way that would be practically useful in real life.
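
    That failure mode - an optimizer gaming the letter of its objective - is easy to reproduce in miniature. Here's a toy sketch with entirely made-up "physics": an evolutionary hill-climber rewarded only for distance covered, which discovers that growing one long limb and falling over beats actually learning to walk.

    ```python
    # Toy sketch of specification gaming: a hill-climbing "evolution" rewarded
    # only for distance covered. The physics model is invented for illustration.
    import random

    random.seed(0)  # make the run reproducible

    def distance_covered(genome):
        limb_length, walking_skill = genome
        # Falling over covers a distance equal to limb length; walking covers
        # a distance proportional to (hard-to-evolve, capped) skill.
        return max(limb_length, walking_skill * 10.0)

    def mutate(genome):
        limb, skill = genome
        limb = max(0.0, limb + random.gauss(0, 1.0))                # limbs grow freely
        skill = min(1.0, max(0.0, skill + random.gauss(0, 0.01)))   # skill is capped
        return (limb, skill)

    best = (1.0, 0.0)                       # start: short limb, no walking skill
    for _ in range(500):                    # simple hill climbing over mutations
        child = mutate(best)
        if distance_covered(child) > distance_covered(best):
            best = child

    # The optimizer "solves" the task by growing the limb, not by walking.
    print(best, distance_covered(best))
    ```

    Because the objective only measures distance, the search finds the degenerate limb-growing strategy every time; nothing in the reward says "walk", so nothing walks.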
  • shabadooshabadoo Member Posts: 324
    The documentary "The Future of Work and Death" discusses the impact technology will have on us. It uses historical references to extrapolate future possibilities. Technology has, for example, eliminated jobs. But new industries come into existence, bringing new opportunities. People who were displaced by tech are often enriched by the eventual outcome. It's a really good watch; I highly recommend it if you haven't seen it.