
AI Trends

Ban

Troglodytic Trouvère
Article Team
I was talking about the now as much as any hypothetical, pointing to studies where people are developing emotional attachments to machines NOW. It will only get worse. Referencing Data was just making a smart-assed point; his "death," if it could even be called death, wouldn't mean much of anything to me except the destruction of a really useful tool.

On the flip side of this, for me, I admit to great sadness on the death of animals... more so than with most humans for some reason. Illogical, but true.

And Devor's point is solid, bioengineering is likely even scarier. But hey! Dystopian fears are a powerful magnet for human brains.
Sure, that's fair. If we're returning to real-life machinery, there is of course no reason to view it as a living, conscious being, as it quite simply is not. I found the case of Data more compelling to discuss, as in my opinion there is no meaningful discussion to be had on real-world "AI" in this regard (there is a reason I keep putting "AI" between quotes). As for your example of animals, I don't see the lack of logic there. It would be even more odd to believe that they lack our capacity to feel; faking outward emotion without the interior processes would be energy-expensive theater.

Anyways, I'm veering way off topic there. Back to ChatGPT: it is as conscious as my coffee mug, I reckon.
 

Devor

Fiery Keeper of the Hat
Moderator
Ban, I understand enough about how things work nowadays that it really dispels the mystery and the handwavium that's required of the hypothetical. You're asking "what if?" but in a direction that the technology isn't really heading. It's not doing it for me. It feels like saying "What if a cloud in the sky had the capacity of a human brain?" Like, it doesn't. I know it doesn't. You know it doesn't. It's a nonsense question. "But no, seriously, what if?" Well gee, I'd really need to look at the evidence to even think about answering that.

If I built a perfect replica of a human brain out of silicon and metal...

You understand that you cannot build a perfect replica of something using fundamentally different materials, yeah?
 

Ban

Troglodytic Trouvère
Article Team
Ban, I understand enough about how things work nowadays that it really dispels the mystery and the handwavium that's required of the hypothetical. You're asking "what if?" but in a direction that the technology isn't really heading. It's not doing it for me. It feels like saying "What if a cloud in the sky had the capacity of a human brain?" Like, it doesn't. I know it doesn't. You know it doesn't. It's a nonsense question. "But no, seriously, what if?" Well gee, I'd really need to look at the evidence to even think about answering that.

You understand that you cannot build a perfect replica of something using fundamentally different materials, yeah?
If you don't want to engage with a question based on a fictional hypothetical, you shouldn't engage with a question based on a fictional hypothetical. You did that of your own volition and now feel the need to be upset about it. I find this rather rude when the comment you are responding to includes the following quote:
I don't have to personally believe that an actual humanoid AGI can exist or will exist to engage with the hypothetical. This is an SFF forum after all.
 

pmmg

Myth Weaver
I was talking about the now as much as any hypothetical, pointing to studies where people are developing emotional attachments to machines NOW. It will only get worse. Referencing Data was just making a smart-assed point; his "death," if it could even be called death, wouldn't mean much of anything to me except the destruction of a really useful tool.

What if, when you go, Data finds it means nothing, just the destruction of a somewhat useful tool?

The whole question of the future of AI and technology does start to become 'what does it mean to be human' as we go further down the road. I am not sure this matters to the topic raised, but it's coming. 'Course, it's also likely that as we go in that direction, we ourselves start to have more 'enhancements' and become more cybernetic. When we are humans with cyborg parts, and they are robots with human parts... it's going to be hard to answer this question.
 

Devor

Fiery Keeper of the Hat
Moderator
If you don't want to engage with a question based on a fictional hypothetical, you shouldn't engage with a question based on a fictional hypothetical. You did that of your own volition and now feel the need to be upset about it. I find this rather rude when the comment you are responding to includes the following quote:

I don't have to personally believe that an actual humanoid AGI can exist or will exist to engage with the hypothetical. This is an SFF forum after all.

I'm not sure why I come across as upset. You did kind of try to rip apart my earlier post, though.

You're asking about a hypothetical, but this is a thread about real world AI trends. It's only natural that I'd start by addressing those before moving into the hypothetical.

And the thing about the hypothetical is, there is nothing about Data in The Next Generation series to make the case that he is alive (maybe they retconned an explanation in Picard, I don't know). He is a machine, and we know how machines operate, and life just isn't part of it. Even the explanation provided in the series suggests that he is simulating life, not actually alive. But they were always writing with flimsy science in the first place, so it's not a very good example.

BUT, if you want a better hypothetical, I'll give you two of them:

The first is Jarvis, in Iron Man and the Avengers. Jarvis is an AI that started out as a brain scan of a real person that was adapted with additional software and hardware capabilities. Is it alive, or is it simulating life?

In the third season of the Sword Art Online anime, we're introduced to something called an "Artificial Fluctlight." In the anime, they've identified "the soul" as an individual cluster of connectivity in the brain. Then they somehow replicated just that spot thousands of times and interfaced these life forms into an artificial world, like players in a video game, or better yet, people in the Matrix, using software to round out their brains.

That second example makes a much stronger case for being artificial life IMO than either Data or Jarvis. I tend to think of Jarvis as the brain without the soul, and the Artificial Fluctlight as a soul without a brain.

We're ultimately not there yet. There probably is no artificial fluctlight, and for me, I've never seen an explanation for a true AI that seemed remotely plausible. I did see one person make the case that life is a state of matter that's being affected in some of those invisible dimensions we think exist only because weird mathematics seem to prove they do. I didn't hear any evidence for that, but it sounded to me like the most likely hypothesis, and I'm not aware of any sci-fi that tries to use it.
 

Ban

Troglodytic Trouvère
Article Team
Data is the example that started this tangent, so I used it. I'm not a Star Trek viewer and I don't particularly care about the details. It was a simple and very clearly fictional example to hang my simple, hypothetical point on.

No, I did not "rip apart" your comments. I addressed your apparent and out-of-the-blue belief that I believe ChatGPT is equivalent to a fictional AGI. That is not, as you say, "natural," given the clear communication around discussing the fictional entity that is Data.

The lines "Like, it doesn't. I know it doesn't. You know it doesn't. It's a nonsense question. "But no, seriously, what if?" Well gee, I'd really need to look at the evidence to even think about answering that." and "You understand that you cannot build a perfect replica of something using fundamentally different materials, yeah?" come off as being upset and are quite clearly emotionally charged. How else would one read such a line of questioning, wherein my questioning is described as "nonsense," my reasoning as invalid ("you know!"), and the word "yeah" is thrown in to repeat your question of whether or not I "understand"? "Well gee," I wasn't born yesterday, Devor.

Let's move on, this has run its course.
 
No, you wouldn't build an AI to write a novel that tries to compete with other novels. But you'd build an AI in an attempt to compete with the novel-writing industry as a whole. How many people talk about writing the story they want to read? Well, with a simple subscription fee you'll be able to do exactly that, without the work of writing it.
ChatGPT can already do that, as evidenced by the millions of "how to write a novel with ChatGPT" YouTube videos out there. Thing is, these people just want that novel. They don't care about the quality of it or the writing. Which makes the current iteration of AI good enough for that. Which also means that, from an author's point of view, the only issue with it is that it makes natural discoverability harder. And if you're famous, it might get people to publish crap AI novels in your name. But it means that to "beat" the AI, you just have to write decent novels. Nothing Nobel-Prize-winning or anything like that.
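
For what it's worth, the workflow those videos sell really is that thin. Here's a minimal sketch of it, assuming the official openai Python client (1.x) with an API key in the OPENAI_API_KEY environment variable; the model name, premise, and prompt are purely illustrative:

```python
# Minimal sketch of the "write a novel with ChatGPT" workflow.
# Assumes the official `openai` Python client (>= 1.0) and an API key
# in the OPENAI_API_KEY environment variable. Model and prompt are
# illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

premise = "A disgraced cartographer discovers the map she forged is coming true."

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model would do here
    messages=[
        {"role": "system", "content": "You are a fantasy novelist."},
        {
            "role": "user",
            "content": f"Write chapter one (about 2,000 words) of a novel "
                       f"based on this premise: {premise}",
        },
    ],
)

print(response.choices[0].message.content)
```

That's the whole pipeline; everything else in those videos is prompt fiddling, which is exactly why quality, not access, ends up being the differentiator.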
 

Devor

Fiery Keeper of the Hat
Moderator
ChatGPT can already do that, as evidenced by the millions of "how to write a novel with ChatGPT" YouTube videos out there. Thing is, these people just want that novel. They don't care about the quality of it or the writing. Which makes the current iteration of AI good enough for that. Which also means that, from an author's point of view, the only issue with it is that it makes natural discoverability harder. And if you're famous, it might get people to publish crap AI novels in your name. But it means that to "beat" the AI, you just have to write decent novels. Nothing Nobel-Prize-winning or anything like that.

So what's happening now is that ChatGPT is getting licensed to different companies who tune it for specific functions: therapy, teaching, dating, video games, and so on. Khan Academy, for example, is tuning it for use in education, upping its math capabilities (people said it was bad at word problems? Not this version) and providing it direction on when and how to provide answers versus other types of feedback. People were saying that AI had no place in academics until Khan Academy showed everyone what they were up to. Their video led to AI policies getting updated all over the place.

Novel writing is going to take longer, but people are definitely working on it. It might look a bit different: they might want you to fill out a lengthy form instead of a two-sentence prompt, for example. But they don't need to sell individual books to make money tuning ChatGPT's novel-writing abilities: tuning the AI isn't that expensive and there's plenty of opportunity, so there's reason for them to develop it.
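
To make the "lengthy form" idea concrete, here's a hypothetical sketch of how such a service might collapse an intake form into one structured prompt. Every field name is invented for illustration; I'm not describing any real product:

```python
# Hypothetical sketch: turning a novel-writing service's long intake form
# into a single structured prompt. All field names are invented.
def build_prompt(form: dict) -> str:
    sections = [
        f"Genre: {form['genre']}",
        f"Protagonist: {form['protagonist']}",
        f"Setting: {form['setting']}",
        f"Tone: {form['tone']}",
        "Plot beats: " + "; ".join(form["plot_beats"]),
        "Things to avoid: " + ", ".join(form["avoid"]),
    ]
    return "Write a chapter-by-chapter outline for a novel.\n" + "\n".join(sections)

prompt = build_prompt({
    "genre": "epic fantasy",
    "protagonist": "a retired siege engineer",
    "setting": "a city built inside a dormant volcano",
    "tone": "wry but melancholy",
    "plot_beats": ["the aqueduct fails", "an old rival returns"],
    "avoid": ["prophecies", "chosen ones"],
})
print(prompt)
```

The tuning work, in other words, is mostly in deciding which questions the form asks; the model underneath stays the same.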
 

Devor

Fiery Keeper of the Hat
Moderator
Let's move on, this has run its course.

I almost let it drop, but the last thing I want is lingering bad blood.

I addressed your apparent and out-of-the-blue belief that I believe ChatGPT is equivalent to a fictional AGI.

But that's really not what I was trying to do.... here's the quote with slightly changed paragraphing and an emphasis on where I actually quoted you again when I started responding to you.

Does it walk like a duck and quack like a duck, though?

ChatGPT is very impressive, but the root of how it's made is fairly simple, and not at all intelligent. It's basically guessing its way through every sentence.

But even in a hypothetical situation where we've created something "to function akin to a human mind," well, you're asking us to assume that the machine operates in some fashion that machines don't operate in. It's like an oxymoron. If it's somehow alive, it's not a machine. And if it is, I'd really have to know how it works to be able to answer the hypothetical.

I can see how it might have looked that way a little because the first two lines were in one paragraph, but that's honestly not the way I was thinking. All three paragraphs were originally one paragraph because they were all part of the same thought, but I made it two paragraphs at the last minute.

The lines "Like, it doesn't. I know it doesn't. You know it doesn't. It's a nonsense question. "But no, seriously, what if?" Well gee, I'd really need to look at the evidence to even think about answering that." and "You understand that you cannot build a perfect replica of something using fundamentally different materials, yeah?" come off as being upset and are quite clearly emotionally charged. How else would one read such a line of questioning, wherein my questioning is described as "nonsense," my reasoning as invalid ("you know!"), and the word "yeah" is thrown in to repeat your question of whether or not I "understand"? "Well gee," I wasn't born yesterday, Devor.

🤔 So I don't know. I think I messed up. Most of those lines were about my own substitute hypothetical, not yours. In my mind there was distance between those comments and you, but you clearly didn't feel that way. Worse, I was getting annoyed, so I let it run into that last sentence, the "you understand/yeah?", which came afterwards. I used it as an excuse to let loose too much. I'm sorry.
 

Ban

Troglodytic Trouvère
Article Team
That's alright Devor, misunderstandings or a bad mood can happen to the best of us.
 

Devor

Fiery Keeper of the Hat
Moderator
So the AI-published books are getting dangerous. Apparently there were three AI-generated books about foraging mushrooms that were full of errors.

That's not surprising, but it could herald a problem: Sooner or later Amazon will have to clean up their book market. And that's going to create hurdles for self-publishers.

I would expect that within a few years self-publishers are going to pay to have their books reviewed in some way before publishing.

Am I right? Wrong? What do you guys think?
 

Ban

Troglodytic Trouvère
Article Team
I think that's a good assessment with regard to non-fiction, though I doubt it would translate into a policy change for fiction. As seen in your example, there's a simple case to be made that "informative" books riddled with errors can cause real-world harm, but beyond great annoyance on the part of readers there's no such danger in a fiction story. It doesn't much matter if elves have pointy ears in one paragraph and rounded ears in the next. That being said, part of me hopes that stricter rules for self-publishing non-fiction will indeed translate into stricter rules for self-publishing fiction. Amazon could use a bit of quality control; currently they just check whether the formatting works and leave the content of the work to speak for itself. I doubt this will be the catalyst, though (again, specifically regarding fiction).
 

Devor

Fiery Keeper of the Hat
Moderator
While it's true that the need only applies to non-fiction, I suspect they'll apply their policies across the board. Fiction publishers have posted about being overwhelmed with AI submissions, and the same is happening at Amazon. I think the sheer volume and poor quality of these AI submissions, plus the opportunity to make money charging for some kind of gatekeeper service, will be enough motivation for that.
 

Ban

Troglodytic Trouvère
Article Team
On the flip side, Amazon is not the only online retailer that sells self-published books. I'm not sold on the idea that they would voluntarily give up their dominant lead to the likes of Barnes and Noble. Nevertheless, it is just as possible that the top retailers will come to an agreement about how to deal with this issue and will all institute such a paywall. Personally I'm not too worried about a potential curtailing of self-publishing. It might prove to be a good way not only of limiting AI submissions, but also of raising the prestige of self-publishing a little.
 

skip.knox

toujours gai, archie
Moderator
It strikes me that one of the chief criticisms to be leveled at AI-generated material concerns not the AI but rather the humans who cause the material to be generated in the first place, followed closely by the humans who fail to edit, review, and otherwise act responsibly toward the material prior to (and after!) publication. As ever, the machine isn't really the problem. It's operator error.
 
Incorrect non-fiction books are not the sole domain of AI, though. I could just as easily write one that would be equally incorrect. Perhaps the only difference is that, because writing a book yourself is a lot of work, only people actually interested in a subject will undertake the task, which means at least some of it will be correct.

And I don't think Amazon will do anything about it in the short term. They've already partially fixed the problem before it occurred. Discoverability is the hard part of publishing books. There are already something like 50 million books on Amazon, with about 10 million Kindle ebooks. Even doubling that with 10 million AI-written books isn't going to change much. With 10 million books, the trick already is that you have to be found by readers. The Amazon algorithms already work in such a way that bad books, and books no one buys, sink to the bottom, where they're never found again. Which is what happens to most (if not all) AI books.

Whether Amazon changes their policy depends on three words: Bad Customer Experience.

And at the moment that's not the case. Yes, the bad mushroom books suck for the handful of customers who bought them. But for the other 500 million Amazon customers it's just business as usual. No need to change anything.
 

Devor

Fiery Keeper of the Hat
Moderator
Incorrect non-fiction books are not the sole domain of AI, though.

....

Yes, the bad mushroom books suck for the handful of customers who bought them.

Right, but the implication is that a bad mushroom book isn't just wrong but could actually be dangerous to the reader. Eating the wrong mushroom can kill a person. That's a new level of risk. Most incorrect books aren't wrong on that level.
 