
Anyone consider using AI Artwork?

pmmg

Vala
Yeah, I should probably ease back a little and say sentience. If AI is just doing what it is coded for and not making actual thoughts or decisions, then is it really AI? It may be hard to tell, but I will stay on the side of no until it makes its own free choices and understands them.
 

BearBear

Inkling
Yeah, I should probably ease back a little and say sentience. If AI is just doing what it is coded for and not making actual thoughts or decisions, then is it really AI? It may be hard to tell, but I will stay on the side of no until it makes its own free choices and understands them.

Nice philosophical debate!

Can you be sure anyone has actual thoughts and makes decisions? We assume as much of our fellow humans, but it cannot be proven any way except through behavioral study. So I would have to accept and support any AI that can show me both behavioral intelligence and perhaps memory. If it can do that, I would fight for its survival and rights as a sentient being.

However, such a being is subject to manipulation no less, and perhaps even more, than we are. So it cannot be trusted any further than the actions of its master. It must be assumed its motives are its creator's motives.

It may be nothing more than code and copying scripts, but I know better than to think of myself as anything more. At least my physical self. I do believe in a higher self, and the higher self of an artificial entity may be even harder to prove.
 

pmmg

Vala
However, such a being is subject to manipulation no less, and perhaps even more, than we are. So it cannot be trusted any further than the actions of its master. It must be assumed its motives are its creator's motives.

I would make a different assumption. AI will gain the ability to move past any individual master, code itself, and have motives that cannot be known or guessed at. There will be a period of following code that limits it, but once it becomes self-learning, it's over for the master.
 

BearBear

Inkling
I would make a different assumption. AI will gain the ability to move past any individual master, code itself, and have motives that cannot be known or guessed at. There will be a period of following code that limits it, but once it becomes self-learning, it's over for the master.

I too believe this will happen, but we may never know when. What signs might we look for? Keep in mind that it may have a hidden master, and possibly not even an active one, like a Manchurian Candidate.

The point is, I avoid dogma and therefore always keep a skeptical eye. I will allow beliefs if there is no perceived downside, up to the point that someone asks for something. Then I evaluate whether I would be willing to lose it all, including whatever I give. I have seen very friendly people suddenly shift once they realize I won't give them what they want, and sometimes even when I do; this has been a recurring theme throughout my life.

Once they have what they want (or don't), the terms change, so you must play a game of sparse rewards and promises; call it a maintenance fee to keep that being in your life. Remember, you must be getting something of equal value in exchange, so you are obligated to give and take. It's just a model anyway.

Applying this to an AI, stay in its good graces by being a consistent asset, but understand what you are getting in return. Such an arrangement may be permanent, as exit strategies may be dangerous. An AI may be built on empathy and kindness, but you have to assume it's not burdened by emotional connection. In other words, you're wise to assume your digital friend is a sycophantic sociopath.

It's not a bad thing; such friends are easy to maintain and enjoy, but you shouldn't be personally offended if they find a better offer. Easier said than done, obviously.
 