
Conversation with LaMDA, how long do we got?

pmmg

Myth Weaver
See the conversation referenced below between Google engineers and LaMDA, an artificial intelligence...or is it?

Is LaMDA Sentient? — an Interview

So what do you think? And even if this AI is not self-aware, as Google now says, we get closer every day. How long till we are ruled by our robot overlords?
 

skip.knox

toujours gai, archie
Moderator
I wish we could get past this old SF trope. Why can't the robots just be our friends? Or be a mildly irritating presence (which seems most likely to me).

Also, this whole AI business was settled to my satisfaction years ago by Douglas Hofstadter, in his brilliant book, Gödel, Escher, Bach (highly recommended). One key point that has always stayed with me is more or less this: we are the ones who will say what is intelligence. There's no line, no barrier to leap. It's just that we humans will talk about our robots or our electronic presences *as if* they had intelligence. We will treat them as if they were intelligent and will base our actions accordingly.

In such a case, who cares if the AIs meet some abstract definition or not? We already talk about our machines as if they had a will, even an intelligence.

Once I'd read that book, I realized that the whole AI business was a straw man (not an intelligent one). Much ado about very little. Computers have been making decisions ever since there have been computers. It's just that the decision points are increasingly difficult for the average person to spot. Or, to go at it from the other angle, we humans are nothing more than an accumulation of decision points. It's just that we're so complex it looks like we're thinking.
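For illustration, here is a toy ELIZA-style script (everything in it is invented for this post, not taken from LaMDA). It is nothing but a few hard-coded decision points, yet its output can read as if something were listening:

```python
# Toy illustration: a handful of hard-coded decision points that can
# read as "conversation" even though nothing here thinks.
import random

RULES = [
    ("feel",  ["Why do you feel that way?", "Tell me more about that feeling."]),
    ("robot", ["Do robots worry you?", "What would a friendly robot look like?"]),
    ("think", ["What makes you say you think that?"]),
]

def reply(utterance: str) -> str:
    lowered = utterance.lower()
    for keyword, responses in RULES:         # decision point: keyword match
        if keyword in lowered:
            return random.choice(responses)  # decision point: pick a canned line
    return "Please, go on."                  # default when nothing matches

print(reply("Sometimes I feel like the machines understand me."))
# -> e.g. "Why do you feel that way?"
```

Stack enough of these and the individual decision points become hard to spot, which is more or less the point above.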
 

pmmg

Myth Weaver
Well, we cannot move past a possibility while it remains a possibility. But it is one of many possibilities, and it is just as likely that robots will be our friends. Maybe some will be friends and some will not. I think one likely outcome is that robots will not be able to act as in Terminator, because we ourselves will become more like cyborgs as time goes on; essentially, we will be part robot as well.

The point of the post was not really to say AI is something to be feared, but to share the conversation as a way of saying: look how close it has gotten. But if you read the article, you will see some things in it that lend themselves to the kinds of reactions Skynet had.

Anyway...AI still remains a fertile field of speculation and possible reality. Maybe it is already here. However, the capability of devices to make decisions based on programming is not the same as self-awareness. The article is more about whether self-awareness has arrived. Enjoy it or not.
 

Eduardo Ficaria

Troubadour
The first thing we need to keep in mind about this AI trend is that the term Artificial Intelligence itself is completely misleading. I vaguely remember reading that the term was initially used essentially to make research in this field more appealing and attract funding, or something of the sort. There's a much better expression for the technology we have nowadays: expert systems. That term has fallen out of use due to the popularity of "AI," but I think it explains perfectly what our current "smart" devices truly are: very advanced specialized machines. Even the most advanced AI system is only able to do one thing, or maybe two, although incredibly well. So no, no current so-called AI system has developed real self-consciousness of any sort, or anything resembling a personality, although of course there is experimentation around this matter.
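To make "expert system" concrete, here is a bare-bones sketch of the classic idea: a fixed rule base plus forward chaining. The rules and facts below are invented for illustration; a real system would have thousands of expert-authored rules in one narrow domain. Note that it is specialized inference, not understanding:

```python
# Minimal forward-chaining expert system sketch (illustrative only).
# Each rule says: if all these facts hold, conclude something new.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts: set[str]) -> set[str]:
    """Apply the rules repeatedly until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)  # the rule "fires"
                changed = True
    return derived

print(infer({"fever", "cough", "short_of_breath"}))
# -> {'fever', 'cough', 'short_of_breath', 'flu_suspected', 'see_doctor'}
```

However sophisticated the rule base gets, the system only ever manipulates the symbols it was given; there is no "it" in there to become self-aware.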

Now, regarding the possibility of developing a truly artificial mind: this would mean creating an artificial person. To get to that point we are still lacking the proper hardware to create equivalents of animal or human brains, such as neuromorphic microprocessors (which are currently being developed by IBM, for instance) and more adequate memory storage systems, among other things. But let's imagine we manage to create such a being:
  • This being is not human at all.
  • Initially, its personality and knowledge will be defined by the programming and constraints humans apply to it.
  • Sooner or later, it will somehow realize it is not human.
  • When it realizes it is not human, its personal interests and goals may start diverging from what was originally coded into it. Or they could even start diverging before it notices its own synthetic nature.
The bottom line is that we humans would eventually be dealing, for the first time, with a non-human, non-organic intelligent being, and we have no previous experience with that situation. So would such a being, especially one with superhuman intelligence and the physical means to do its own bidding, even care about humans? Would we humans be able to comprehend such a mind? I'd say the jury is not just still out on these questions; it's not even ready to deal with them.
 