Transforming PR and the ethical use of AI

New podcast episode with Dr Shannon Bowen

In this conversation, Shannon discusses her journey into public relations, emphasizing the importance of ethics in data usage and communication. She highlights the challenges posed by AI in the field, particularly regarding misinformation and data manipulation. Shannon advocates for a careful and ethical approach to using AI in communication, stressing the need for transparency and critical thinking. The discussion also touches on the future of communication in the age of AI and the importance of ethical frameworks to guide decision-making.

For other Women in PR episodes and summaries, check here. For previous series, check SoundCloud, YouTube, or your favorite podcast app.

Takeaways (AI generated with Riverside.fm)

  • Shannon’s journey into PR began with a focus on statistics and crisis management.
  • Ethical considerations are crucial in the use of data in PR.
  • Data manipulation can lead to misinformation and unethical practices.
  • AI is a tool that requires careful and ethical usage.
  • Generative AI can produce misleading information if not used correctly.
  • Transparency in AI usage is essential for maintaining trust.
  • The communication industry must adapt to the challenges posed by AI.
  • Ethical frameworks are necessary for guiding AI decision-making.
  • Public relations professionals should focus on strategic thinking rather than relying solely on AI.
  • The future of communication will be shaped by ethical considerations in AI.

Transcript (AI generated with Riverside.fm)

Ana Adi (00:05.912)
So Shannon, wonderful to have you. Welcome to the show. Thank you for inviting me on Women in PR. I’m happy to be here. Shannon, we ask every guest on the show, whether practitioner, educator, or academic, how did they end up in PR? So that’s the first question for you. How did you end up in PR? I know you call it…

strategic comms; we can get to that, your preference of name. But still, how did you end up in this particular area in practice? I was very interested in doing statistics when I was an undergraduate, and I got a master’s degree using a lot of statistical analyses, but I needed an application for my thesis. So I studied crisis management, and I looked at crisis management in the automobile industry.

I found that to be a really compelling topic. And so I eventually started working for a statistical research firm after that. I knew that I didn’t want to stay forever in the professional world because I was also really interested in theory. And I wanted to study public relations and eventually management and communication ethics.

because I was facing a lot of ethical challenges in how those statistics were being used. The data that I was generating could be used in any number of ways, and sometimes I found those ways to be ethically questionable. And so I knew that was something I wanted to study. I eventually had the opportunity to be admitted to the PhD program at the University of Maryland.

and study with the Grunigs. So that’s how I got into the field. I found it endlessly fascinating, but definitely in need of ethical considerations. Can you share one of those stories where you saw statistical data being questionably used? Sure. It happened all the time, on a routine basis. But there was a congressman, for example, outside of

Ana Adi (02:19.746)
the Chicago area and I was asked to go in and change my statistical analysis to make it more favorable toward his campaign. And I thought that was not giving the congressman the data that he had asked for because it was not necessarily all favorable toward him. But I was told he was an important client. He had been with the research firm a number of years and we wanted to keep him happy. So

Unfortunately, I think that happens in the research world a lot more than it should. There are good and bad pollsters. There are partisan pollsters. There are those that are trustworthy and not trustworthy. And sometimes it comes down to who is paying the bill, unfortunately. And I thought that was not a good way to go about giving the congressman information.

but I didn’t have the ability to articulate those concerns against the president of a multimillion dollar research firm who was my boss. So it gave me a great opportunity to go back to graduate school and study ethics and teach people in our fields to articulate those concerns so that when they face an ethical dilemma, they can handle it with the ability to explain why doing the easy thing is not the same as doing the right thing.

Okay, so what was the right thing in this case? Playing devil’s advocate for a moment. In this case, when you have this politician, of course each and every one of us would like to see positive data about ourselves, isn’t it? But what would have been the right thing to do, in your view? In my mind, the right thing would be to give the person objective and balanced data, no matter what side of the political spectrum they’re on.

Messages were not resonating for that particular congressman with a certain region of his district and with a certain gender. He wasn’t resonating with women. And if he had been given accurate data, he could have re-tailored his message. He could have made a more concerted effort to reach women in his district and to reach those around the lake in the Chicago area who were more affluent.

Ana Adi (04:43.468)
But if he’s not given the accurate information, if he’s only given positive information about what’s working well, he’s not going to know where he needs to retool his campaign and create better outreach with his messages. So I think it’s incredibly unfair to clients to give them only favorable data that’s biased in favor of keeping them as a client because he could have performed better with the electorate had he known where to

create more of an outreach with his message. Does that qualify as misinformation to you?

No, I think that it really qualifies as data manipulation. So I’m not sure if I would call it misinformation because people aren’t really receiving all of the information that they need, including the client. And so I think to manipulate data is at its very core, extremely unethical. And I specialize in the type of ethics that requires you to have a duty to the truth.

before you have a duty to a client or an employer. And I think if you’re intellectually honest and you have that duty to the truth, you’re bound to look at both positive and negative aspects of the data. And then you can decide strategically what you want to message upon. But if you don’t look at the whole picture, you’re starting out from a biased place to begin with.

I’d like to go a little bit closer to today, where we talk about AI and the threat or the challenge it might pose of spreading inaccurate information, misinformation, manipulated information. But before that, and I’ll come back to AI: you’ve mentioned this duty to truth, and

Ana Adi (06:50.146)
When maybe the essence of truth, or the starting point, is up for debate?

This is so important, and this is why we need to train on ethics at the university level, but even prior to that, and after that as well, as part of our professional responsibilities, because we often see trends and the tides of what’s desirable change.

It can even change according to which client hires you to do their data, to do their strategic communications. But ideally, you have to be someone who thinks about an objective analysis of what’s honest and what’s right, independent from what your clients want or from what someone wants to get out of a certain study.

You really have to think of it independently because that’s an overarching truth that goes beyond the situational concerns of your client or their desires for a public relations campaign or a certain press release. It really gives you a way to insulate yourself from their whims and their fancies and their desires. It gives you a way to argue back based on

a rational analysis of what we can see evidence-based decisions leading to. And that gives you a way to be an independent and more objective counsel rather than being a hired PR flack, a hired gun, a mouthpiece. Our field has had some issues in the past, of course, with those types of scenarios.

Ana Adi (09:05.41)
But the people who rise to the highest echelons in strategic comms or public relations are those who can think independently and objectively counsel the management on what should or should not be done. Those are the folks who I think earn their way to the highest positions, because rather than acting as someone who’s going to put out a message management wants, you can counsel on what that message

really should say, and whether there should be a message at all, if it’s warranted, or if you need to do some introspective analysis and policy building before you communicate. And that leads us to the highest level of our function, which is acting as an ethical conscience or an ethical counsel to those who have us on their management team or who hire us as a consultant to their management.

But that also means not committing to serving organizations, or to focusing on the benefit, the primary benefit, of the organization and only then the others, right? Right. I think you can focus on that primary benefit, but you can also get external data and do some more research, even if it’s informal research,

to expand management’s thinking and strategic decision making. And that way you can insulate it from poor decisions or myopic decisions that only the management team wants. You can usually bring in some external perspectives or some different viewpoints for consideration that might improve your policy and might improve your strategic communications as well. OK.

Let’s go to ethics in AI, artificial intelligence. I know this is something that has been of great interest to you since before it was a buzzword. And you and I have met in the past, and one of our first encounters was when you were retelling to a group of equally excited and curious academics how baffled you were in your encounters with

Ana Adi (11:24.046)
Silicon Valley executives discussing ethics. Do you recall that instance and that research? I absolutely do. And this is part of my experience that I found incredibly baffling, but very interesting. So it led me to continue that research stream. Since about 2013, I’ve been looking at

the AI and digital environment of ethical decision-making, because that really is going to have a tremendous impact, not just on society, but on all of our lives personally. And when I was doing this formative research, speaking to executives in AI at various organizations, none of them

had ethics as part of their strategic or core considerations. None of them. No one at any organization I spoke with. And this is really an incredibly important concern, because I spoke with someone, for example, who developed autonomous vehicle driving systems. And at that point in time, that developer told me: We

do not think about ethics. Explicitly, we don’t think about ethics. We have many other things to think about. We think about LIDAR sensors, camera angles, and algorithmic perfection. And for us, doing the right thing is to create a more perfect algorithm. It has nothing to do with what you philosophers think about. That’s for you. That’s not for the engineers.

And so this disconnect, I realized, was going to be something that had tremendous human impact. And it has. We saw that we’ve had fatalities from autonomous driving systems that were not perfect. They were designed with algorithms that didn’t detect things like a pedestrian walking beside a bicycle with a flat tire. That was not part of the algorithm.

Ana Adi (13:43.744)
And so that particular woman was killed by an autonomous vehicle. And in that sad case, we see why ethics should be a part of engineering and algorithmic perfection, even if it takes someone like me coming in from the outside as an applied ethicist to say, here are the concerns we have about this engineered system.

And that now brings us to large language models, the darling and the buzzword since 2023. Loads of people are fascinated by these statistical machines, linguistic statistical machines, and their outputs. And you’re not as excited, right? You’re more skeptical, as far as I understand. And you see…

In the same vein of ethical discussion and decision making, you do see a few challenges, but also opportunities there, particularly for strategic communications, PR, and all connected persuasive, organized, whatever departments we want to call them. Yes. Can you

give us an example? Can you help me out in understanding what your position is when it comes to generative AI and misinformation? Absolutely. When we get to the large language models, we’re looking at data that has been essentially stolen from every original content creator across the entire Internet, from

book writers to blogs, magazine articles, legal documents. The LLMs are trained on stolen data. And now that we’re seeing some copyright legal cases finally emerge, we’ve started to understand the extent to which the LLMs have scraped everything they can get access to without attribution. And so

Ana Adi (15:54.336)
This is really scary, and I have a number of examples that I can offer. But first let me put this in the context that generative AI is a tool. It’s not a perfect intelligence. It’s a tool, just like a hammer in a toolbox, and you have to know how to use it. Obviously, if you take a hammer and put a nail into the wall to hang a picture, that’s a great use. If you use it to smash your thumb, that’s not a great use.

Generative AI is pretty similar. And I tell my students: use this tool, but use it with great care and great suspicion, because it hallucinates, which means it comes up with things that are not real. And much of what it uses is based on stolen, copyrighted data. For example, in class, I have my students look up, in

any AI that they use (most of them go with ChatGPT, some use Claude or even Character.AI), what the difference is between publics, stakeholders, and audiences. And sometimes, when they use certain generative AI, it will scrape the publications that I have published on that topic in the Encyclopedia of Public Relations. So

they will actually parrot back my own words without attribution. And so this is an example of how AI can be misleading. You can be plagiarizing without knowing it. And for public relations people, this is career suicide, of course, if a client or an organization is hiring you to write something and you have plagiarized it even without knowing so.

You’ve just undermined your professional credibility and probably harmed the client and probably lost a job. So I think this is an important caveat. And when I say that AI hallucinates, what I mean by that is it will come up with data that’s not actually real because it doesn’t know the difference between real and fabricated when it’s looking at the connections of words and projecting based on a probability.

Ana Adi (18:15.042)
what the next words are going to be. So, for example, I can tell an AI: come up with a list of great publications that I can read explaining ethical parameters in artificial intelligence. When I do that, some of those publications may be real, but you’ll find that nine times out of ten, they’re fabricated. They’re not real publications. They look as if they’re real.

They’re in credible journals or with textbook publishers, but these things do not actually exist in reality. And the AI has fabricated something that’s an answer to a question, but it doesn’t really exist when you go to the library and try to pull up these sources. So again, we have to be exceptionally careful. It’s funny that you mentioned that, because I’ve experienced it on both sides,

as someone who was contacted to provide access to a publication that looked very fancy and a little bit in line with the research that I was doing around that time. And yet I was absolutely baffled, right? I had to go, and I did go, through all my archives and all my, you know, manuscripts and half-finished papers and whatnot.

But this inquiry came for a particular journal, a particular time and date. And I was like, it’s either me not remembering this, which is highly unlikely, or this doesn’t exist. And so the conversation started; the person who contacted me was a student from somewhere. This happens quite a bit. Then we figured out that it was AI leading them astray. It happens to me as well.

You know, we’re writing, we’re doing things, and every once in a while there’s a reference there that just doesn’t exist. And yet, wouldn’t it be nice if something like this had been written? But I think for comms, the bigger, I mean, you mentioned plagiarizing without knowing it; I think maybe the scariest, if you want, or the biggest challenge is that people assume that in the future,

Ana Adi (20:36.916)
we’re going to still use websites and journalistic sources as the sources of truth and verification. Although the push is that we would be using these chatbots more and more, conversational or not, voice or not, there’s still somehow hope among many practitioners that I’ve met that these sources of truth are going to remain there. And I kind of think:

If nobody comes and visits your website, if nobody comes and downloads your report, then at some point this question is going to come: why bother doing it, if it’s going to be appropriated and embedded anyway into a large language model? I mean, there are so many questions that I have related to that. The assumptions we make about work patterns, the fact that we’re still…

in agencies and communication departments, putting so much more emphasis on producing information rather than considering what’s important, right? And maybe, you know, doing more with less. And with generative AI, we talk about productivity, which means flooding people with more information.

There’s no consideration, never mind, you started me ranting now, but there’s no consideration whatsoever of the user, right? There’s this whole great success story of, my God, we’re going to have personalized information. And all I can think of is, my dear Lord, I’m going to be flooded with personalized information, which means I have two choices: I will just close everything, or not read any of it. Neither of those means

I’m going to engage with this ultra-personalized thing. Anyways, I assume we share similar concerns, but unlike me, who just rants at you, you have set up an AI ethics advisory board. So there’s organized advice, if you want, right? How did you set that up, and how…

Ana Adi (22:54.238)
Is there a framework? And if so, how can organizations use it to navigate their tricky AI challenges? Great questions. I set up this group because no one in our field was thinking futuristically. We think in terms of: what are we doing in the next month? What are we even doing in the next quarter? And that seems to be it. And I realized that unless our field

thinks a little bit further into the future and longer term, we’re going to be left behind, because AI is changing things at an incredibly rapid pace, and we have to be ready not only to understand and use it, but to advise our organizations and our clients on it. So the AI Ethics Advisory Board is a subset of AI and communication researchers on the digital frontier

who are part of the Global Strategic Communication Consortium. That’s a group I set up of futurists and ethically minded scholars. Essentially, an organization can come to us and say: we need training, or we need help figuring out an AI policy; we need problem-solving and decision-making ability. We have some technical experts who are involved on the computer engineering front.

And we have a lot of communication ethicists who help think through these problems. So I’ve done training, for example, for the state government of South Carolina. The Office of Regulatory Compliance has a person in charge of data for each state agency and office. And they’re using AI data; they just don’t know how to use it ethically. They don’t know

what to buy, they don’t know what the parameters are. So I have trained them on ethical decision-making models so that, in their use, they can use deontology, they can talk about honesty, veracity, privacy, virtue ethics, and the right to autonomy of citizens of the state of South Carolina. So hopefully those types of considerations are helping people make better decisions day to day on the job.

Ana Adi (25:16.642)
There isn’t a one-size-fits-all ethical framework yet. We have a good one that’s been offered, as you know, by the European Union. We have others that are being put forth for various applications, such as in the Department of Defense. And I’ve developed one that’s fairly generic that’s going to be coming out in the journal Public Relations Review. And I think

it’s helpful to have a framework that can help you work through ethical decisions, but each encounter we have is going to be different. So we need a broad framework like the one I’ve offered on deontology, which looks at your duty. And that’s to be an honest person, to use veracity as a decision-making guide, to maintain dignity and respect of

all of the stakeholders that are involved, including those that are in the so-called anonymous data. Sometimes it’s not so anonymous. And then what is your intention with using this data? Are you proceeding with goodwill or are you trying to exploit a particular data subset for some type of advantage or return that might not be good in the long run?

So we have to think about these concerns when we’re using AI and when we’re looking at how AI gets its data set to make sure we’re not acting with bias, prejudice, or exploitative ideas in mind that we’re really proceeding with trying to serve our stakeholders and publics with good intention.

Ana Adi (27:04.382)
It sounds like a good path. I think that same framework can apply to comms, right? This idea of intent and impact. So maybe the two of them can go hand in hand. Now, let’s dig a little bit here into… you know, the ethical use of AI also has something to do with public trust, building trust in the tool.

Now, Clea Bourne. We edited, last summer, a little book that’s free to download on AI and PR, where people were very happy to talk about what they’re doing in their departments, but there was very little research. Clea Bourne, as a researcher, has written about public relations and the digital. And one of the things that she’s saying is that communication practitioners are tech

friendly, right? But also that they’re very gullible to their own tricks. And one of the things that she points out is that for technology companies to thrive, they need to build hype and follow that hype with this fear of missing out, FOMO, right? And that, she says rather skeptically and cynically, to a point, is what is driving investment, right? Because people…

fearing missing out, are going to use this tool, just as you’re going to have the people at the beginning of the curve who would want to be invested in something, right? Now, for communicators, whether they’re on the FOMO train or on the hype train, there seems to be a trust, an element of trust and excitement in technology for good. And I think what she’s arguing is that

communicators more than anybody else should actually be the first skeptical people in the room asking, what’s this doing? Where is it coming from? What if this? What if that? So trust is central to AI adoption, even more so in comms. What are we doing as communicators?

Ana Adi (29:32.824)
How can communicators support AI if it’s a good idea? Or, if it’s not a good idea, how can communicators go about AI in a way that is ethical, that is not destructive, and that doesn’t show the mirror back to ourselves as the flawed humans and societies that we are, but rather takes us to a different place?

It’s a great question and we have to keep in mind always having this standard of not only what can we do, but what should we do. And if you have that question in mind when you’re using AI, it will definitely allow you to assess the AI a little bit more skeptically to ask where the data is actually coming from.

Almost all data that’s used in AI is biased. So you need to understand the biases, whether it’s a language bias, or a certain demographic group being left out, or, if you have multinational operations, you might not be getting any AI data about a tremendous part of your business, for example. So we have to think in terms of augmenting the AI, to understand its deficits, but also to augment it

with other trustworthy forms of research and data. Going back to our social science research, our foundations in polling, talking to people in an informal manner, generating data on interest groups and NGOs that is probably not going to be represented in the AI that you’re using is also very important. As long as we think of it as a tool,

and not the ultimate decision maker, we can use AI. But we’re going to have to always ask that central question of not just what can we do, but what should we do with our data and our AI. And that will keep communications central and involved in these questions. And as long as we have generative AI, we need to disclose its use. We have to say when we’ve used it to create comms,

Ana Adi (31:51.422)
used it to create data and ideas. Sometimes it’s not good to start with the AI, because it can limit our thought process. Sometimes it’s good to do some research, some brainstorming, and come up with some strategic alternatives, and then use the AI to augment that thought process. Research finds that it’s a little more limiting if you start with the AI;

it can constrain your thinking. And so I think putting it into the toolbox is a perfectly good use of AI, but don’t let it drive where the toolbox is going. Right. So, as you said, it’s a hammer, right? We need a few other tools; a hammer can only do so much. But before I let you go, there are two things I’d like to ask you about. Because of course,

the big lure of AI, and I’ve moaned about it, so now I have to ask you: the big lure of AI is increasing productivity, right? For communicators this is probably the speediest way to get the job done, considering that comms has to do with producing text and content, right? Then we do a lot.

But the opposite of that would be: if we can automate all this text creation, then why would we need communicators anyway, when this whole communication is done for us and on our behalf, and AIs can talk to one another and maybe have the conversations for us? I mean, maybe we can take time off. But I think we still have to work as societies

to get to that point where AIs can talk to one another and we can take time off. Even if they do that, we still need to work to survive. There’s this fear that AI in particular is going to oust comms, right? So it’s not a benefit for communicators, but rather life threatening, livelihood threatening. So how do you see

Ana Adi (34:08.622)
comms getting out of this? How should organizations handle these ethical challenges of job losses? Or how should agencies handle the ethical challenges of job losses? Maybe you have real examples of how organizations did that already. And what can we learn from that? Well, let me question the very basic assumption of AI

making our jobs easier and more productive. In the communication industry, I’m not sure that’s true. I believe that in manufacturing it’s probably true. So if you go to the BMW manufacturing plant in the Upstate, in Greenville-Spartanburg, South Carolina, AI drives the robots that assemble the cars. It runs the speed of the manufacturing conveyor belt,

and it does make production incredibly more efficient. So in that case, I think we do see that it’s revolutionized an industry and the jobs might not be lost, but transformed into the human that operates the robot and interacts with the AI rather than a human bolting tires onto the vehicle. And so that industry is definitely changed, but for communication,

Keep in mind that we have to fact check everything that AI gives us. We can’t rely on it to not plagiarize. Even if we tell it not to plagiarize, it may quote back a standard about plagiarism that is plagiarized from a university’s judicial office, et cetera. And so we can’t trust AI yet.

I don’t think we’re at the point of having it take over our jobs. In fact, I would say if you’re going to use it, you’ve probably added several hours to your workflow, because now you’re fact checking, you’re making sure the references it used exist, that the words, in order, are not plagiarized from an author. I think that creates a lot of headaches for public relations professionals. However,

Ana Adi (36:23.562)
In the future, we are moving toward artificial general intelligence, called AGI, a human-like thought process, where AI can then eventually learn to create its own word structures and not plagiarize, to create new and original content and thought. But we’re not there yet. AGI does not exist. It may be being tested in a few laboratories.

They’ve had some successes in recent days with quantum computing that is able to independently think for a split second. So we’re not there yet. We’re many years away from that frontier. So I think public relations people shouldn’t be so worried about their jobs immediately. But I do think the industry is going to be transformed when we get to a point of AI being able to think strategically.

That’s a different story. But right now, AI is not able to help us think strategically. It can create a list of things that are already out there. So it amalgamates data like a tool. It’s good at pattern recognition on what’s already out there. Humans are good at imaginative and strategic thinking of what’s not already out there. So we’re still innovating, and AI is not

helping us innovate in comms, although it has had some successes innovating in the biological sciences, and virology in particular. However, in communication, the words that it’s out there searching are, again, oftentimes based on stolen data. So thinking creatively, innovatively, and originally is going to insulate you from potential harms and job loss from AI.

I should write that down. One more thing: you mentioned already the Global Strategic Communication Consortium, and the fact that you have surrounded yourself with a bunch of inquisitive people who are equally excited about ethics and not put off by the challenges it presents. Instead of a goodbye, is there a project that emerged from the consortium that is close to your heart,

Ana Adi (38:51.158)
or that you’re proud of, or, I don’t know, that maybe helped solve a problem, and you think: this is the way to do it. You know, this is the way. Well, we have an upcoming book that I think might help solve some of these problems, or at least help people think through them systematically. It’s called the Handbook of Innovations in Strategic Communication. It’s being published by Elgar early in the spring of 2025. And so

It has chapters by experts that go through these specific challenges. Some of them are wicked problems. Some of them are specific to genres such as advertising, what the future is going to look like with regard to transhumanism, artificial intelligence use in warfare, strategic communications in military applications. So we have a lot of chapters that can offer some guidance by

the top experts in the field. However, there are no easy answers. There are no easy solutions, because the future is still unfolding, of course. But when you have a lot of incredibly intelligent people thinking about these challenges and problems, it can offer directions. And I think one of the breakthroughs I would say that book has offered is in ethics: thinking about humility as part of our ethical

obligation and decision making, because we don’t know the future. We can’t predict accurately what AI is going to be used for, and what it will do independently if we get to an artificial general intelligence. So if we approach these questions with ethical humility and rectitude, reflecting on what we’re doing and what we should be doing, I think that puts us in a better and safer place than if we charge forward

adopting technology without thinking about the ethical ramifications of that use. But see, it’s so interesting that you say that. So watch out for that book; it sounds like it’s a thick one, by the way. But it’s so interesting that you say that, because you’ve mentioned the EU framework and the EU AI Act. And from what I see in my several bubbles,

Ana Adi (41:16.084)
North American perspectives aren’t as excited about the European, conservative views of AI, right? So this is what came to my mind when you said charging forward. To me, that sounds like a very American thing, which has been hailed and praised for driving innovation, you know, this idea of breaking things and then figuring things out. Whereas that has been very much pitted against

the European view that was maybe a little bit more skeptical, more conservative, more unexcited, if you will, more cautious. I think that was the word I was looking for. Loads of great questions. Our time is up. As usual, these conversations could go on forever because they’re so exciting. So would the ranting, right? There are so many things that we could get riled up about. Yes.

Shannon, thank you so very much for your time. This is an episode worth listening to over and over, and every time, I bet, our listeners will take new notes in a different section. So again, thank you for making time; we’ll definitely keep in touch. And don’t forget to buy that book.

Thank you for having me on Women in PR, and thank you to the listeners for giving us some of your time. I invite you to visit our website, the Global Strategic Communication Consortium at the University of South Carolina, where you can learn a little bit more about our activities; you’re always welcome to get involved. And of course, we are happy to have feedback from your listeners. So if you have any questions, drop me an email. Thank you. All right. Well, that’s it for today.
