As we recently shared, generative AI holds a lot of potential for supporting creative work. And while we don’t want to replace ourselves (who would?!), we do see how AI may become increasingly useful for agency professionals in the years to come.
That said, we believe the responsible use of AI requires human judgment and oversight to avoid bias, misuse, and inadvertent risks of harm.
Many of us feel we have opened Pandora’s box and must now deal with the fallout. So, while the technology is rapidly evolving and we are exploring the potential of AI, we must also address its shortcomings and potential pitfalls.
Our teams are actively testing and learning with generative AI within an ethical and legal framework. By following the six guidelines below, we will ensure that our use of generative AI aligns with our core commitment to the highest level of professionalism, decision-making, and ethical conduct.
What are the guidelines we are adhering to, and where did they come from? We are building upon several shared frameworks, but the one we model ours most closely upon comes from the PR Council, which developed it with our colleagues at the law firm of Davis+Gilbert.
We use caution when putting confidential client information into a generative AI tool or platform. For example, we will not use it to create the first draft of a press release about a new product, draft client business plans, or summarize internal confidential documents. Unless you are working on a closed platform, anything you enter as a query has the potential to be retained by the platform and shared publicly.
We also do not use AI-created images or sound for anything but internal collaboration. As the multiple lawsuits now addressing copyright and intellectual property infringement show, AI platforms will scrape anything and everything available to them, which can produce work that infringes on others’ rights and work that cannot itself be copyrighted.
The use of AI to create or spread deepfakes, misinformation and disinformation is abhorrent to our values.
Generative AI tools are not always accurate. They are increasingly implicated in plagiarism, copyright infringement, and trademark infringement. And since they are only as good as their sources, the information they provide is often inherently biased.
Human oversight is necessary for fact-checking and sourcing. To this end, a trend we are seeing in agencies is using these tools to spur creative collaboration rather than to produce deliverable work.
In our view, it’s important that our clients know if generative AI tools are being used in any part of the creative process. Since all of our work is “work for hire,” we cannot deliver materials to clients that they cannot own or copyright. This is why we use AI to shortcut storyboarding and spur creative ideas internally: it helps inform the creative process but is not incorporated into any final work.
It has been somewhat bizarre, though perhaps predictable, to see the inherent bias in AI-generated writing and images. Any results generated by AI require close review by a diverse group to ensure bias does not carry into our work. Using AI to translate documents is an imperfect art as well, so here too, humans need to review results to ensure meaning and intent are accurate. As our creative work shows, we are committed to working with diverse talent, and we will continue to do so to ensure all communities and voices are represented in our creative work.
All forms of communication, from media to advertising, have taken a hard reputation hit over the past decade. Ensuring our team upholds high ethical standards for our work is woven into our training, our values, and our culture.