Generative AI tools such as ChatGPT have attracted significant attention recently, particularly in the workplace, where their use has become more prevalent. A major issue has emerged, however: employees often present AI-generated work as their own, without proper training or employer approval. Salesforce [1] surveyed 14,000 workers across 14 countries and found that a significant number of employees were using AI without guidance or authorisation. Addressing this issue is crucial to ensuring AI is used ethically and with appropriate confidentiality safeguards, backed by training and company approval.
The survey found that 28% of workers already use Generative AI in their tasks, half of them without formal approval, and a further 32% plan to use it in the future. Use of Generative AI in the workplace is increasing regardless of established policies or safeguards. Concerningly, 64% of surveyed employees have presented AI-generated work as their own, and 41% would exaggerate their Generative AI skills to secure a job or position. Additionally, nearly 7 in 10 workers globally lack training on the safe and ethical use of Generative AI in their profession.
Safe Use of Generative AI
The danger of using public AI models such as ChatGPT and Bard lies in the potential for sensitive information entered into a prompt to be retained and surface in responses to unrelated prompts later. To avoid this risk, users should refrain from including personal or organisationally sensitive information in their prompts: even though the model does not deliberately reuse this data, it may still reproduce it inadvertently. The result can be confidential information, such as intellectual property, being disclosed to the public.
From a personal perspective, sensitive details such as financial information and addresses can end up in text generated in response to someone else's prompt, exposing the original user to identity theft or fraud. When using AI models, it always pays to err on the side of caution with the information you provide.
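One practical way to err on the side of caution is to strip obvious personal identifiers from a prompt before it is sent to any public model. The sketch below is illustrative only: the `redact` helper and its two regular-expression patterns are assumptions for demonstration, and real PII detection requires far broader coverage than a couple of regexes.

```python
import re

# Illustrative patterns for two common personal identifiers.
# These are demonstration patterns only, not exhaustive PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(
        r"\b(?:\+?\d{1,3}[ .-]?)?(?:\(?\d{2,4}\)?[ .-]?)\d{3,4}[ .-]?\d{3,4}\b"
    ),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholder tokens
    before the prompt leaves the organisation."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or call 0412 345 678."))
# → Contact [EMAIL REDACTED] or call [PHONE REDACTED].
```

A pre-processing step like this cannot catch everything (names, account numbers, and free-text context all slip through simple patterns), so it supplements, rather than replaces, the rule of keeping sensitive information out of prompts altogether.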
Ethical Use of AI
Ethics play a significant role in the application of artificial intelligence, akin to the introduction of other novel tools in society. The uncertainty surrounding their ethical boundaries necessitates an examination to distinguish between appropriate and inappropriate uses. Ethical considerations involve avoiding disclosure of sensitive information about others without informed consent. Moreover, transparency is crucial, ensuring that one does not claim the AI-generated work as their own or utilise it for deceitful purposes, such as cheating or plagiarism within academia.
Ethical use of AI also means not feeding models information that disparages an organisation or individuals, such as minority groups. Supplying such information can, over time, contribute to misinformation and bias in the model's responses.
Use Critical Thinking
Users should be mindful that AI is a means to an end and requires proper oversight. It is not advisable to rely blindly on the output of an AI model: all information it generates should be fact-checked for accuracy. In one case, a legal professional used AI to draft a legal brief, and the model fabricated case citations [2]; he was temporarily suspended from practising law as a result. Verifying AI-produced output is essential to avoid such pitfalls.
AI Policy Generation
Creating most workplace policies is relatively straightforward, since they cover ground that has been well understood for years. Formulating a policy on artificial intelligence usage, by contrast, presents a considerable challenge for many organisations, because it is not yet widely understood which aspects of AI a policy needs to address. Factors such as required training, ethical considerations, alignment with organisational values, and preventing personal gain through AI must all be contemplated.
As a result, an AI policy will likely necessitate frequent reviews, possibly every three to six months during initial stages, to ensure its accuracy and relevance with the progression of AI and expert-identified concerns. To remain informed on these matters, it is advisable to subscribe to newsletters centred on AI-related workplace issues.
Developing a comprehensive AI policy for your organisation is crucial given the growing prevalence of artificial intelligence in the workplace and beyond. Clear guidelines on acceptable use of AI are essential, just as a bullying policy outlines acceptable and unacceptable behaviour. If you require assistance drafting an AI policy for your workplace or volunteer organisation, please do not hesitate to reach out to us.
1. More than Half of Generative AI Adopters Use Unapproved Tools at Work – Salesforce 2023 https://www.salesforce.com/news/stories/ai-at-work-research/
2. Disciplinary judge approves lawyer's suspension for using ChatGPT to generate fake cases – Colorado Politics 2023 https://www.coloradopolitics.com/courts/disciplinary-judge-approves-lawyer-suspension-for-using-chatgpt-for-fake-cases/article_d14762ce-9099-11ee-a531-bf7b339f713d.html