Artificial Intelligence

Out of context: Reply #1048

  • 2,583 Responses
  • yuekit2:

    "Artificial intelligence doesn't exist"

    • kalkal: Hour+ video, all about semantics.
    • yuekit: I think it's more about how people confuse machine learning with "real AI", and the media perpetuates that view.
    • kalkal: Nah, that stuff she listed is AI. What she's referring to (in the context of saying what ML/DL is not) is AGI, and she's conflating it with AI. Having said that, I agree that the media does do the same.
    • yuekit: Sure, I guess Siri could be considered AI, depending on how you define it. But in terms of actual "intelligence", it seems like there would be a step below AGI, where it hasn't surpassed humans but can still learn, understand, and reason to some degree. Many people seem to think ChatGPT is like this.
    • kalkal: Sure, it's exhibiting some of the properties we might expect from AGI, but not to the full extent, nor all of them.
    • yuekit: The argument made by more skeptical people is that it's nowhere close and the whole thing is basically a fraud, i.e. GPT only presents the illusion of being a proto-AGI, leading to all the media hype (and possibly to bad decisions by companies).
    • yuekit: It's an interesting debate, and I'm still trying to decide where I come down on it. This podcast gives a more in-depth view.
      https://www.buzzspro…
    • kalkal: Did you see OpenAI's best test for consciousness?
    • yuekit: No, but the funny thing is, if you ask ChatGPT itself, it will deny pretty strongly that it's conscious or intelligent.
    • yuekit: You can also trick GPT into reciting its training data.
      https://not-just-mem…
    • yuekit: Their trick didn't work for me (OpenAI may have already patched it), but this did:
      https://twitter.com/…
    • yuekit: I think that's a peek behind the curtain at how it's just an elaborate probabilistic text-synthesis machine. But that doesn't mean these systems aren't useful.
    • kalkal: The test is: gather a dataset as normal and train a model as a control. Then remove every reference to consciousness and the human experience from the dataset.
    • kalkal: It has to be a fine-tooth-comb situation: not a single reference or hint as to what consciousness is actually like.
    • kalkal: You then prompt this new model, which should have no idea what consciousness is, and provide it a definition.
    • kalkal: If its response is like "I've never heard of that before, but I know exactly what you mean", chances are it is in fact conscious.
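
The "probabilistic text synthesis" characterization above can be illustrated with a toy sketch. A bigram model is vastly simpler than GPT, but the core sampling move is the same: pick the next token in proportion to frequencies learned from the training text. (All names and the tiny corpus below are made up for illustration.)

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8):
    """Sample each next word in proportion to how often it followed the previous one."""
    out = [start]
    for _ in range(length):
        nexts = counts.get(out[-1])
        if not nexts:
            break  # dead end: this word was never followed by anything
        words = list(nexts)
        weights = [nexts[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the model predicts the next word the model samples the next word"
counts = train_bigram(corpus)
print(generate(counts, "the"))
```

A large language model replaces the frequency table with a neural network conditioned on a long context, but the output is still a sample from a learned next-token distribution, which is the skeptics' point.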
