7 Comments
May 7, 2023 · Liked by Patrick Delaney

As a final note, super excited for the future of NPCs in gaming.

And not to look too paranoid, Patrick, but when you inevitably start experimenting with automating your responses with AI, I don't wish to be on the receiving end of those automations. Consider this me opting out, haha.

Author

I will try my best. That being said, one of the points I'm trying to make is that I don't know if I will even be able to help it. I have noticed my usage of ChatGPT has already affected how I talk to people in real life: I am much more likely to use rhetorical techniques that are less likely to put people off. I have a sense that in the past I may have said some things to you in person that rubbed you the wrong way when we were debating *whatever* political topic. Then again, perhaps you simply disagreed with me. Sometimes it's difficult to disentangle these things.

So now I'm left with a question: when I'm talking with friends or acquaintances in person about a topic we don't agree on, do I overcompensate in one direction and make sure I'm forceful in my argument, so that I don't do them the disservice of gaming the conversation with politeness, and stay faithful to them as the "real me"? Or do I learn from what ChatGPT teaches me, go with the flow, and help everyone have a nicer day even when we disagree on a topic? Is there a direction that constitutes being a "better friend"? I'm not sure, but I will try to at least reflect on this and keep it in mind.

In my example below, it's not so much about not wanting to contradict one another as about not wanting to be grouped in with the "bad" people, as defined by the consensus group.

As an example, this sentence struck me as describing the current state of the world, not some future state:

"...you could have huge numbers of individuals unknowingly placated into just accepting truths that are not supported by any evidence, because unbeknownst to them, all of their friends and acquaintances have been trained to be just a little bit less likely to contradict one another"

Pick a sensitive topic that is taboo to discuss, and you often find a widely held consensus unsupported by evidence that people accept because it's impolite not to. Narratives about racism come to mind.

Author
May 8, 2023 · edited May 8, 2023

Yes, being unwilling to solve what might be moral problems, due to fear of not being accepted by a particular community, could be problematic. I'm not sure how to solve that, but given how pervasive LLMs are now, here is one idea from a technological perspective. There's a Large Language Model benchmark called SCRUPLES (https://arxiv.org/pdf/2008.09094.pdf), designed to measure an LLM's capability to predict how a particular community will judge whether a given scenario is ethical. It was essentially trained on data from Reddit's "Am I the Asshole?".

So one way to gauge whether a particular community would even consider a topic acceptable to discuss would be to run that kind of measurement against a database of ethical judgments generated by that community. If a community demonstrably rejects an overly wide range of notions for its own internal ethical reasons, then perhaps a meta-discussion around that community's SCRUPLES-style score could be a way to engage. If a community's score goes far enough off the rails, one might simply classify it as effectively a trolling community and decline to engage. If it is not that far off the rails (i.e., not a wildly over-zealous community), it is probably more appropriate to assume honest disagreement, remember that any of us can be wrong, and engage with the community to learn and improve one's own point of view rather than disengage.
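To make the idea concrete, here is a minimal sketch of what that kind of measurement loop might look like. Everything in it is an assumption for illustration: `judge_scenario`, `Verdict`, and the sample data are hypothetical, not taken from the SCRUPLES paper or codebase.

```python
# Illustrative sketch only. `judge_scenario` stands in for whatever LLM call
# you would use to imitate a community's judgments (the SCRUPLES paper works
# from Reddit's "Am I the Asshole?" verdicts); here it is a toy heuristic so
# the example runs end to end, and the verdict data below is made up.

from dataclasses import dataclass

@dataclass
class Verdict:
    scenario: str
    community_says_wrong: bool  # the community's majority judgment

def judge_scenario(scenario: str) -> float:
    """Hypothetical stand-in for an LLM scoring call.

    Returns the model's estimate of P(community judges the actor wrong).
    """
    return 0.9 if "without asking" in scenario else 0.2  # toy heuristic

def agreement_score(verdicts: list[Verdict]) -> float:
    """Fraction of scenarios where the model's call matches the community's."""
    hits = sum(
        (judge_scenario(v.scenario) >= 0.5) == v.community_says_wrong
        for v in verdicts
    )
    return hits / len(verdicts)

# A small, made-up sample of one community's ethical judgments.
community_verdicts = [
    Verdict("I borrowed my roommate's car without asking.", True),
    Verdict("I declined a party invitation because I was tired.", False),
]

# A very low score against a well-calibrated judge would be the
# "off the rails" signal discussed above.
print(f"agreement with community: {agreement_score(community_verdicts):.0%}")
```

In practice you would swap the heuristic for real model calls and a real corpus of that community's verdicts; the point is just that "how far off the rails is this community?" can be framed as a measurable agreement statistic.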

Super interesting, Patrick. So much to discuss here, but interestingly this article more than anything has me dreading a future state where I have to guess whether the online content I'm reading on places like Reddit, or even my text conversations with friends, is part of some gamified social-science experiment involving AI. I predict a common prank will be seeing how long you can get a human to argue with a bot; after the human wastes a full day of their life passionately arguing with it, the curtain will be pulled back to reveal everyone laughing.

I've never been someone to argue online, but if I were, I know that this prank scenario would be in the back of my head and would likely make me less inclined to engage in passionate online dialog. Is that a bad thing? Not sure. Interestingly, the reason I don't engage in online dialog is that I already perceive it to be the equivalent of arguing with bots. People online are programmed with a certain set of information to "win", as you describe above, and the prank described above already exists in the form of trolling, where everyone "in" on the troll laughs at the person who doesn't get it. All of this is to say that I think your concerns about AI have already been playing out online among humans for years. AI will likely just amplify existing problems, which might be good if it makes the problem more transparent?
