
AI Trends

Demesnedenoir

Myth Weaver
Pure conjecture, of course. As far as art goes, I still think long-form voice and prose are going to be difficult as hell for AI, but short-form art, graphic designers in particular, will be impacted most. Gradually, as AI is more accepted, it will beat the shit out of stock-art websites, and to be blunt, I don't see much difference between a graphic designer altering images from stock photos (which are shitty and repetitive) and utilizing AI images in a similar fashion (until AI too gets repetitive). But seriously, even here, AI has a ways to go when dealing with weapons and poses, and it doesn't appear to be advancing that fast. I see AI as an assistant for people who want or need one, and as a way to plop out fun shit that I wouldn't pay for in the first place... silly art.

The work world, well, productivity could go through the roof with programming.

Wait and see.

I've been thinking that the emergence of these various "AI" tools may not be a portent of things to come after all, but might instead have been the end of innovation in that given field. Decades of cumulative work in different fields preceded the "sudden" emergence of these various text, art, voice and sound generators (or re-arrangers if you wish). Perhaps the explosion of innovation we saw at the start of 2023 will turn out to be the end result for decades to come.
 

Ban

Troglodytic Trouvère
Article Team
Demesnedenoir said:
Pure conjecture, of course. As far as art goes, I still think long-form voice and prose are going to be difficult as hell for AI, but short-form art, graphic designers in particular, will be impacted most. [...] Wait and see.
Good points. Now that the dust has settled (a bit), the fears might have been overblown. I can't say I've used these new tools much beyond testing them out and having a quick laugh. The creative utility remains impressive in a "how did it do that?" sense, but not all that useful in practical terms. And I agree, from what I've gathered on how these generators/re-arrangers work, it seems long-form will remain their Achilles heel. More silly art and silly writing might emerge on the creative front, but the doom and gloom from the first few months of 2023 seems not to have manifested in anything truly threatening. Unless there are major structural innovations waiting to be revealed, it does seem to me that the current state of the art shall remain for a long time to come. Still excited to see what utility these tools might have outside of the creative field.
 
That’s what AI feels like to me: a folly. It’s not real, but it can be fun. But what purpose does something have if it’s not real? Most people now can spot AI artwork, and it always has a strange off-kilter feel to it.
 

pmmg

Myth Weaver
The problem with the AI speculation is looking at AI today and thinking it is the AI of tomorrow. It is too early to tell where AI will end up, and what effect it will have on all of us.
 

Demesnedenoir

Myth Weaver
The key is that AI isn't intelligent; it's stupid and has zero imagination. It's awesome for creating quick joke content, and it can be used as a tool to help graphic designers create covers better than most stock photo crap. It can be used to create a coherent story one paragraph at a time with human oversight, but in the end, it's only as good as the human.

It will get better, and it will do some amazing things, both good and bad most likely, but the exponential growth some people anticipated as a doom loop is closer to 1 squared than 10 squared, heh heh. People love to look at trend lines and declare a future reality, but this tends to be a foolish panic reaction brought on more by political expedience and power grabbing than by reality.

That said, it needs to be watched closely for the day it does become intelligent, or for when stupid people put too much faith and power into artificial control... the military and energy industrial complexes come to mind. Should writers strike to keep AI shackled? Yes. The fight needs to be fought before we've already lost the war, no doubt about it. For the time being, though, I don't feel threatened enough by AI to get worked up over it. But I reserve the right to change my mind, heh heh.
 
Most people now can spot AI artwork, and it always has a strange off-kilter feel to it.
I disagree, as the whole Bob the Wizard cover proved. If you missed it, that was a cover made with AI-generated images, and it won a cover contest which had specifically stated that no AI images could be used. AI art has gotten to the point where if you use it to create elements of a cover image, it's as good as or better than many stock images. Yes, people eventually found out it was made with AI help, but that took some professional artists digging deep to find out and prove. For the average consumer it was not just good, it was great.

Now the trick is that you don't want to ask it to create a whole cover for you. Rather, you create parts of the cover, just like you don't use just one stock image to create a cover but combine multiple elements to create the scene you want.

Another area where AI images shine is Facebook ads. I've been experimenting with them, and they work wonderfully. You get what you want, in multiple variations. These are images that people only see for a few seconds (at most). AI art here is more than good enough. It's also cheaper, more varied, and easier to get than stock images, and way cheaper than anything an artist might come up with.

I think that if you're a graphic designer, you need to figure out the role AI will play in your process. You can go without it, of course, but it's gotten to the point where it can do much of what you can in terms of simply making things. Actually creating a composition or a complete cover is harder, because it doesn't know what a cover is or why it works.

As for AI taking over long-form writing, I think that is a very long way away, for two big reasons.

The first is that the feedback loop is hard, and there is not a lot of input. What I mean is that for an image, it's very quick and easy to point out what's wrong: you've given that character 3 arms or 5 fingers, that's not an apple, that sort of thing. For long-form writing it's harder to point out exactly what's wrong, and since AI doesn't know what a novel is, it's hard to explain to a computer why it's wrong.

The second is a lot more important though, and one people tend to forget. And that is money. There simply is very little money to be made in teaching AI to write novels. Say you've perfected your novel writing AI. Now, what are you going to do? As many indie authors will tell you, writing the novel is the easy bit, selling it is way harder.

Look at it this way. The average author makes something like $500 per year from their writing. Now, an average programmer earns something like $100,000 per year. If you're designing a system that can replace one of those, who are you going to optimize your system for?
 
Are you suggesting that because it’s much cheaper than paying a real human being, it’s already proving its value?

I can see where you’re coming from. I’m not against the use of AI art, I just don’t think it has any true value: it’s not real, and it’s always the same derivative stuff. Too dramatic, and much of it still has that strange quality to it, for me.

Graphic design and illustration / artworking are not the same discipline. If you’re using AI to generate artwork, it still lacks something crucial, which is design: the job of communicating.

Artwork-heavy book covers, for example, are not to my personal taste; you can communicate what a book is about even without an image of the main character.

I thought the Bob the Wizard thing was a combination of AI artwork and human manipulation? Either way, I don’t like the cover.
 

pmmg

Myth Weaver
The first is that the feedback loop is hard, and there is not a lot of input. [...]

The second is a lot more important though, and one people tend to forget. And that is money. [...] If you're designing a system that can replace one of those, who are you going to optimize your system for?

If/when AI gains free access to internet content, these datasets are already present. It won't need to start at zero, it can just use existing stuff. The feedback loop is already there. It won't be that we are teaching AI the profitable stuff and not the unprofitable stuff; it will simply gain the ability to learn, and the cost of disciplines after that will not be a factor. We would have to be limiting AI, not hoping to expand it.

The bottom-line math is a bad comparison for too many reasons to list. It will replace both, because it can.

What we're banking on is that the current model for AI (the language model) is the only way AI can go. But people are working every day to make it a better, truer AI. Its first iterations will have issues, but the goal is a self-aware, self-learning system. If they achieve it, it won't need us.
 
Are you suggesting that because it’s much cheaper than paying a real human being, it’s already proving its value?
What do you mean by value here?

To be clear, I don't think AI art is art (though I have seen some amazing pieces). I do think the images AI generates have a lot of commercial use, and that is the value it has. I've been playing around with Facebook ads a bit. Ads require images. If I wanted to use stock photos, I would need to buy an expensive subscription to a stock photo site. And that would get me access to the same standard stock images everyone else has, none of which show exactly what I want, or which require a lot of image manipulation to get right. This means a lot of work and even higher cost to get my ad to break even. For me this is a losing proposition.

On the other hand, I can also use one of the free AI generators, or get a cheap enough subscription to something like Midjourney. With a bit of playing around I can get a lot of great images I can try. They're all unique, and I can even take one image and tweak it further. For me, for ads (which are throwaway images) it's a game changer.

As for the Bob the Wizard cover, that was indeed a combination of AI generated bits and human manipulation and input. Which is exactly how AI images at the moment should be used to create larger compositions. And it was considered good enough to win a cover competition. The value here for AI images is that it can generate some things faster and better than a human artist can. Damonza (one of the bigger cover creation services online) has been able to remove a character creation fee from their process because they can now use AI to create characters on their covers.

pmmg said: If/when AI gains free access to internet content, these datasets are already present. It won't need to start at zero, it can just use existing stuff. The feedback loop is already there. [...] We would have to be limiting AI, not hoping to expand it.
But AI already has that. ChatGPT was trained on internet websites.

There simply aren't enough good novels available for the current AI models to learn how to properly write one. A paragraph, yes. Those are easy. Lots of paragraphs everywhere. But whole novels? Not a chance. There are something like 5 million novels on Amazon. If a model had access to all of them, that might just about be enough. However, I'm willing to bet most of those aren't great, so they could be discarded until you're left with less than a million. Don't underestimate the magnitude of the training data needed. Especially because we have no way of telling the AI model why a certain novel gets 5 stars while another gets 3 or 1.

AI needs some kind of way of telling what is great and what isn't. Tests have already shown that if you train models like Midjourney or ChatGPT on AI-generated data, they start breaking down fairly rapidly. It only takes 4 or 5 iterations before you end up with gibberish. That's because AI isn't intelligent. It doesn't know what a novel is. Or an apple, or a hand. Or even a word. All it knows is zeroes and ones, and that if it gets asked a specific bunch of zeroes and ones, it should give another specific series of zeroes and ones as an answer.
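That degradation is easy to caricature with a toy resampling loop: fit word frequencies to your own generated output, and rare items vanish for good. A throwaway sketch, not a real training run (the vocabulary and counts below are made up for illustration):

```python
import random
from collections import Counter

random.seed(1)
# A made-up "training set" of word frequencies, with one rare word.
counts = Counter({"dragon": 50, "sword": 30, "castle": 15, "grimoire": 4, "palimpsest": 1})

for generation in range(1, 6):
    words, weights = zip(*counts.items())
    sample = random.choices(words, weights=weights, k=100)  # "generate" new data
    counts = Counter(sample)                                # "retrain" on our own output
    print(f"generation {generation}: {len(counts)} distinct words survive")
```

Once a rare word fails to show up in one generation, it can never come back, so diversity only ever shrinks. Real model collapse is subtler than this, but that's the gist.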

All this means that the easiest way to make an AI better is to specialize it. And if you want to make money off of it, you will want to specialize it in a specific area where you can make a lot of money. And then the value of what you can generate becomes a very big part of the equation. Which makes the comparison between an author's salary and a programmer's a very important one.

As an example, I've worked for a company that built and sold a search engine. Now everyone knows search engines are free, right? After all, Google is a thing. However, it turns out that if you build a search engine specifically designed for lawyers, then that is worth money to lawyers, because it helps them work faster. And you can show them simple maths. You bill customers X per hour. If my search engine saves you 20 hours of work per year, then it saves you 20 * X per year. Which means that if I sell it to you for 10 * X, we both profit. That's how companies think. Same with a programmer. A programmer costs something like $100 per hour. If I can build a tool which saves that programmer 50 hours per year, then that has a value of $5,000. Easy sell if I then sell it to you for $500.
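To spell that arithmetic out in one place, here it is as a throwaway script (the rates are just the illustrative numbers above, not real market data):

```python
def yearly_value(hourly_rate: float, hours_saved_per_year: float) -> float:
    """What a time-saving tool is worth to a customer per year."""
    return hourly_rate * hours_saved_per_year

# The programmer example from above: $100/hour, 50 hours saved per year.
value = yearly_value(hourly_rate=100, hours_saved_per_year=50)   # $5,000 of value
price = 500                                                      # sold at $500
print(f"value ${value:,.0f}, price ${price}, buyer keeps ${value - price:,.0f}")
```

Plug an author's numbers into the same function and you get the post's point: there is barely any hourly value there to save, so nobody optimizes for it.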

And of course, if we get to universal AI, which you can basically teach anything, and which is self-aware and all that, then all bets are off and pretty much everyone needs to find a new job. However, the current generations of AI aren't anywhere close to that. They're glorified auto-complete machines. They know the next word because of advanced statistics, not because they know what those words mean. And iterating on these versions isn't going to get us there. That's simply not how this technology works.
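For anyone curious what "auto-complete by statistics" looks like mechanically, here is the crudest possible version: a bigram counter over a made-up corpus. Real models are enormously more sophisticated and work on tokens rather than words, but the core idea, predicting the next word from observed frequencies with no notion of meaning, is the same flavor:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; any text would do.
corpus = "the knight drew his sword and the knight rode into the night".split()

# Count which word follows which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(next_word("the"))  # 'knight', chosen by raw frequency, not meaning
```

It has never "understood" a knight; it has only counted words. Scale that up by many orders of magnitude and you get something far more fluent, but still statistics rather than understanding.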
 

CupofJoe

Myth Weaver
There was a piece on the radio a few days ago in which CNET were interviewed. Early this year CNET had used an AI engine to write articles for their website. Soon they realised, and were told by others, that the articles were factually wrong.
The AI had been given free access to whatever it could find on the internet [and we all know how good that can be].
It was distilling what it had found and coming back with unreliable stuff.
They took it offline, reset it, and only let it look at CNET's data. At least now they know what it is looking at.
They also don't use the information directly, but as a source for human-written articles.
To paraphrase what they said, AIs are great at finding the cheapest broadband deal in Wichita but not at explaining why you might want to go to Wichita in the first place.
 

Demesnedenoir

Myth Weaver
Semi-off topic. It's interesting that people tend to assume that machines will become self-aware entities that might as well be alive. If Data from Star Trek died, it would still just be a machine to me, less than an animal (my apologies to Brent Spiner). There are studies now showing people getting emotionally attached to technology the way they would to a dog or cat. Ok. Yikes. And, of course, we've been anthropomorphizing animals for a long time. We tend to do the same thing, it appears, with inanimate objects. It's an interesting quirk in human thinking to project humanism onto everything around us. In fact, this might be traced back to the creation of gods and mythology: projecting a "human" explanation onto lightning, creating a being to explain such power. One could take this line of thinking all the way from the prehistoric creation of gods, through humans being blamed for the "wrath" of those gods in a great flood, to the more modern thinking of humans' responsibility for hurricanes.
 

pmmg

Myth Weaver
There is already a lot of thought in the schools of philosophy about whether God created man or man created God.

Humans are strongly wired to have empathy towards a human face. Give a robot a face, and human emotional attachment will follow. Give it one that is expressive, and all the more so. There will always be some who say 'robot, who cares', but they will have to coexist with people who do care.
 

Ban

Troglodytic Trouvère
Article Team
Demesnedenoir said: Semi-off topic. It's interesting that people tend to assume that machines will become self-aware entities that might as well be alive. If Data from Star Trek died, it would still just be a machine to me, less than an animal (my apologies to Brent Spiner). [...]
If it walks like a duck and quacks like a duck... The assumption in those sci-fi works is that the artificial mind is constructed to function akin to a human mind. That's not anything even in the ballpark of real-world capability, but I would find it more "yikes" not to sympathize with an artificial copy of humanity, regardless of the materials used. Again, not related to the real world, but if we are to delve into such a tangent, let's.
 

Demesnedenoir

Myth Weaver
I am probably borderline sociopathic or part Vulcan, heh heh. People disconnecting from real people and connecting with fakes, and the potential sociological ramifications of that, are way scarier than people being uncaring about fake lives.
 

Ban

Troglodytic Trouvère
Article Team
Demesnedenoir said: I am probably borderline sociopathic or part Vulcan, heh heh. People disconnecting from real people and connecting with fakes, and the potential sociological ramifications of that, are way scarier than people being uncaring about fake lives.
Well sure, but in a hypothetical world where such true AGIs were to exist, you could simply opt to have empathy for both.
 

Demesnedenoir

Myth Weaver
Why would I have empathy for a machine's suffering or being turned off or breaking? Empathy for the person whose machine died, sure, that can be a pain in the ass and expensive. Waste valuable time. But for the machine? Nah. Outside of maybe, well, that's a shame.
 

Ban

Troglodytic Trouvère
Article Team
Demesnedenoir said: Why would I have empathy for a machine's suffering or being turned off or breaking? Empathy for the person whose machine died, sure, that can be a pain in the ass and expensive. Waste valuable time. But for the machine? Nah. Outside of maybe, well, that's a shame.
You're conflating things now, equating real-world machines with the hypothetical machines. We are now talking about AGI, which you yourself brought up through the topic of Data, an AGI who can be presumed to function mostly like a human does, with a mind constructed to resemble a human's. The machine in this instance is a person, for they have the mind of a human, so you're making an arbitrary distinction here between a person composed of flesh and a person composed of something else. Function over form.
 

Devor

Fiery Keeper of the Hat
Moderator
The second is a lot more important though, and one people tend to forget. And that is money. There simply is very little money to be made in teaching AI to write novels. Say you've perfected your novel writing AI. Now, what are you going to do? As many indie authors will tell you, writing the novel is the easy bit, selling it is way harder.

No, you wouldn't build an AI to write a novel that tries to compete with other novels. But you'd build an AI in an attempt to compete with the novel-writing industry as a whole. How many people talk about writing the story they want to read? Well, with a simple subscription fee you'll be able to do exactly that, without the work of writing it.


Ban said: If it walks like a duck and quacks like a duck... The assumption in those sci-fi works is that the artificial mind is constructed to function akin to a human mind. That's not anything even in the ballpark of real-world capability, but I would find it more "yikes" not to sympathize with an artificial copy of humanity, regardless of the materials used. [...]

Does it walk like a duck and quack like a duck, though? ChatGPT is very impressive, but the root of how it's made is fairly simple, and not at all intelligent. It's basically guessing its way through every sentence.

But even in a hypothetical situation where we've created something "to function akin to a human mind," well, you're asking us to assume that the machine operates in some way that machines don't operate. It's like an oxymoron. If it's somehow alive, it's not a machine. And if it is, I'd really have to know how it works to be able to answer the hypothetical.

Nor is that where the science is heading. We're going to crash into bioengineering issues far sooner - where actual living tissue, even human brain tissue, is used in conjunction with computers. Why simulate a living brain when we can use the actual thing? But each part of the brain grown separately, like a brain with multiple temporal lobes but lacking other parts, altered on the genetic and structural levels for any given purpose.

I mean, people are freaking out over a nearly impossible true AI when a freaky bioengineered horror show is just a few decades off.
 

Demesnedenoir

Myth Weaver
Ban said: You're conflating things now, equating real-world machines with the hypothetical machines. [...]

I was talking about now as much as any hypothetical, pointing to studies where people are developing emotional attachments to machines NOW. It will only get worse. Referencing Data was just making a smart-assed point; his "death", if even death it could be called, wouldn't mean much of anything to me except the destruction of a really useful tool.

On the flip side of this, for me, I admit to great sadness at the death of animals... more so than with most humans, for some reason. Illogical, but true.

And Devor's point is solid, bioengineering is likely even scarier. But hey! Dystopian fears are a powerful magnet for human brains.
 

Ban

Troglodytic Trouvère
Article Team
Devor said: Does it walk like a duck and quack like a duck, though? ChatGPT is very impressive, but the root of how it's made is fairly simple, and not at all intelligent. It's basically guessing its way through every sentence.
First problem here is that you're equating a hypothetical AGI to ChatGPT and assuming that I consider the two to be remotely comparable. I do not, nor did I (intend to) imply that I do.
Devor said: But even in a hypothetical situation where we've created something "to function akin to a human mind," well, you're asking us to assume that the machine operates in some way that machines don't operate. It's like an oxymoron. If it's somehow alive, it's not a machine. And if it is, I'd really have to know how it works to be able to answer the hypothetical.
Second problem here is that you're placing a distinction between machinery and life where there need not be one, i.e. that is not a consensus but a personal opinion. If I built a perfect replica of a human brain out of silicon and metal, then that would classify as a machine to me. That it is alive does not preclude its artificiality and lack of flesh.
Devor said: Nor is that where the science is heading. We're going to crash into bioengineering issues far sooner - where actual living tissue, even human brain tissue, is used in conjunction with computers. Why simulate a living brain when we can use the actual thing? [...]
Demesnedenoir raised the case of Data from Star Trek, who is a humanoid AGI. I don't have to personally believe that an actual humanoid AGI can or will exist to engage with the hypothetical. This is a SFF forum after all. I am aware of bioengineering, but that is a topic beside the specific point here. An interesting discussion in its own right, but philosophically less so. I know a flesh-and-blood brain can contain consciousness (or utilise it, however this might turn out to be), so there is no reason to believe an enhanced brain could not.
 