Now that so many of us use Artificial Intelligence (AI) regularly, discerning when to disclose that use is tricky. In some ways, AI has seamlessly inserted itself into our lives, upending, disrupting, and transforming even the most basic tasks.
For example, it’s likely many of us use AI to assist with writing. However, AI may be involved at different stages of this process and to varying degrees. These stages could include drafting a work from a prompt, reviewing something that already exists, or even writing, editing, and finalizing an entire piece. Even then, there has to be some human intervention, if only a minimal amount to adjust prompts. Is disclosure required for all of these uses?
Sometimes at work, I draft an email or message and then use Copilot (a generative AI tool) to help achieve the right tone, make sure the statements are clear, and check whether anything might sound confusing. Other times I might use Copilot to draft an Executive Summary or Conclusion for a document I wrote. Copilot is terrific at producing succinct, high-level summaries, which makes it perfect for those sections.
But do I need to disclose that, especially if I’m proofreading and making the final edits? And if so, how and when? And how does using Copilot differ from doing a task myself, or from delegating it to a student? In my mind, the big difference is that Copilot can do it instantly with a relatively high level of accuracy.
Other organizations might use AI chatbots to provide automated assistance to customers. In some cases, this information is readily available: as soon as a chat opens, a message appears alerting the customer that they’re communicating with a chatbot. I find it’s easy to know when I’m talking to a chatbot, even without disclosure. In my experience, most chatbots have a hard time understanding any question that isn’t about something really basic, and even then, misinterpretations are common.
As we all become more familiar with using AI, I’m hoping clearer guidelines and practices will emerge. Some organizations are starting to require labels for AI-generated content, especially images. However, as AI becomes better at replicating human work, discerning the difference will become more difficult. Without guidelines, the line between human and AI may blur beyond recognition.
