
Are You Comfortable With How Your Employees Are Leveraging ChatGPT?

Date

Feb 20, 2025

Category

Insights

Since ChatGPT's momentous commercial debut in November 2022, which eclipsed the initial adoption rates of many other significant digital platforms of the 21st century, AI has rapidly become a fixture in workplaces large and small. All manner of use cases have been popularized, from writing emails and marketing content to automating support functions with generative chatbots. While significant efficiency gains have followed, many businesses continue to grapple with a complex question: how best to standardize and regulate the use of Generative AI in a workplace environment.

Many have yet to do so: according to OECD statistics, 70% of organizations lack an AI governance framework. In this post I will discuss the challenges that arise from employee use of ChatGPT, Gemini and similar large language models (LLMs), and explain why adopting a productized approach to AI is the most effective way to address them and impose clearly defined guardrails.

The primary business challenges that arise when using AI in the workplace

As a business owner, it’s highly likely you’ve already begun implementing AI within your business in some form, most likely through a generalist approach using ChatGPT or comparable LLMs. If so, your employees are probably using AI to draft all but the most basic emails, create reams of content, and even analyze spreadsheets and other documents. Now that your staff have incorporated AI into their workflows and are interacting with it at their keyboards every day, productivity has risen noticeably with no apparent reduction in quality, so there’s no cause for concern, right? Unfortunately, that’s not entirely the case. Here are three potential consequences of unregulated AI usage in the workplace, each of which could result in reputational damage and/or loss of profits, and each of which you should be actively developing mitigation strategies against right now.

1. Improper data handling and disclosure of sensitive information.

Many public LLMs, such as OpenAI’s GPT models, may use input data for model training. Without a governance framework in place, your employees could be feeding sensitive company information into an AI system, and that information may later be ‘leaked’ to the public at large through the training process, should others ask questions that draw on it. This can easily happen if suitable precautions, such as anonymization and de-identification of the data, are not taken. Large corporations have already experienced this first-hand: in April 2023, several Samsung employees were sanctioned for pasting confidential proprietary code, along with company meeting minutes, into ChatGPT. On a related point, the intellectual property implications of AI-generated content are still unclear. Publishing AI output may expose IP related to your business or infringe upon the IP rights of others. It remains something of a legal minefield.
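To make the de-identification point concrete, here is a minimal sketch of the kind of redaction step a governance framework might mandate before any text reaches a public LLM. The patterns and the redact_sensitive function are illustrative assumptions, not a production-ready filter.

```python
import re

# Illustrative patterns only; a real deployment would need a far more
# thorough inventory of identifiers (names, codenames, API keys, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_sensitive(text: str) -> str:
    """Replace each match with a typed placeholder before the text
    is forwarded to a public LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this: contact jane.doe@acme.com or +1 (555) 010-9999."
print(redact_sensitive(prompt))
# Summarize this: contact [EMAIL REDACTED] or [PHONE REDACTED].
```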

2. Significant variations in employee competency levels.

Despite all the advancements in AI model capabilities in recent years, one thing remains constant: the quality of the output produced by any AI model is directly related to how well it is prompted in the first instance. Your employees will almost certainly have differing competency levels when it comes to AI prompting. Two employees may tackle the same task with an identical AI model and arrive at outputs of significantly different quality, purely because of inconsistencies in their prompting techniques. Employees with a high degree of AI competency may also leave your business, taking their skills, knowledge and prompt frameworks with them, and productivity will begin to tank.

3. Production of content that does not align with your brand standards and messaging.

The success of any inbound content marketing strategy depends on consistent branding and messaging across all channels and touchpoints. AI may enable your employees to create content at scale, but there’s no reliable way of ensuring the output from ChatGPT, Claude, Gemini or any other model stays on-brand and on-message from one piece of content to the next. Recent survey data from McKinsey & Company indicates that only 27% of organizations require employees to assess the validity of AI-generated content before it is used, an alarming statistic. There’s also the potential for new employees with little familiarity with your business to generate content off-the-cuff from an LLM and use it to communicate with key stakeholders, content that is clearly identifiable as AI-written and of relatively poor quality. In short, output accuracy is a major concern with GenAI systems, and it’s difficult to enact guidelines that employees can actually be relied upon to follow.

The Solution: Standardize AI Use Through Productization

As we’ve discussed, failure to place guardrails around how your employees use ChatGPT and comparable LLMs is likely to lead to serious challenges for your business, whether they arise now or later. From inconsistent content quality and patchy adherence to brand standards through to data breaches, the risks are simply too great to ignore. There is, however, a solution: productize your high-value ChatGPT workflows. With a productized approach to AI usage in your business, improved control is possible, and here are three reasons why.

Creation of a standardized user experience. An AI-powered product lets you use your highest-performing prompt sequences to create high-quality output, predictably and at scale. Employees never have access to the backend, where the prompts are stored, which means no deep AI expertise is required on their part and your valuable prompt IP is safeguarded, as the sketch below illustrates.
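As an illustration of how this separation can work, here is a minimal sketch of a backend handler that keeps the prompt template entirely server-side. The template text, the field name and the Acme branding are hypothetical.

```python
# Hypothetical server-side module: staff only ever see a simple input
# form; the prompt template itself never leaves the backend.
PROMPT_TEMPLATE = (
    "You are a customer-support writer for Acme Ltd.\n"
    "Tone: friendly, concise, plain English.\n"
    "Draft a reply to the following enquiry:\n{enquiry}\n"
)

def build_prompt(form_fields: dict) -> str:
    # Only the whitelisted 'enquiry' field from the frontend form is
    # interpolated; anything else submitted is simply ignored.
    enquiry = form_fields.get("enquiry", "").strip()
    return PROMPT_TEMPLATE.format(enquiry=enquiry)

print(build_prompt({"enquiry": "Where is my order #1042?"}))
```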

Uniformity of on-brand communication. When a ChatGPT workflow is elevated into a distinct product, you can steer the underlying model to produce on-brand content as and when needed. Prompting ChatGPT in isolation often yields inconsistent results and always demands a nuanced approach. The bottom line: with discrete AI products you have significantly improved control over the reference data and prompts the system uses to render the finished output, and that makes for higher overall content quality.
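One way to picture this is a fixed system message carrying the brand guidelines, assembled by the product on every call. The guideline text below is an assumption for illustration; the message structure follows the common chat-completions convention.

```python
# Hypothetical brand guide pinned inside the product, outside employees' reach.
BRAND_GUIDE = """\
Voice: confident, plain English, no jargon.
Always call the product "Acme Studio", never "the tool".
UK spelling throughout."""

def build_messages(task: str) -> list[dict]:
    # The same system message accompanies every request, so output stays
    # on-brand no matter which employee triggers the generation.
    return [
        {"role": "system", "content": BRAND_GUIDE},
        {"role": "user", "content": task},
    ]
```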

Improved control over the data flow. While in the vast majority of cases it’s purely unintentional, employees will provide sensitive data to ChatGPT from time to time. With an AI-powered product, employees interact with a simple form on the frontend and never submit data to the LLM directly, and the app can be restricted to non-sensitive reference data aligned with the use case it’s built for. This way, important business data remains held in confidence and third-party copyright violations are avoided.
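A sketch of what that frontend constraint might look like: instead of a free-text chat box, the micro-application exposes only enumerated fields, so there is simply no channel through which sensitive data can be pasted into the request. The schema below is hypothetical.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical micro-app input schema: constrained choices rather than
# free text, so nothing sensitive can reach the LLM prompt.
@dataclass
class SocialPostRequest:
    topic: Literal["product_update", "event", "hiring"]
    tone: Literal["formal", "friendly"]
    length: Literal["short", "long"]

def to_prompt(req: SocialPostRequest) -> str:
    # Only these enumerated values ever appear in the prompt sent onward.
    return (f"Write a {req.length}, {req.tone} LinkedIn post announcing "
            f"a {req.topic.replace('_', ' ')} for the company.")

print(to_prompt(SocialPostRequest(topic="event", tone="friendly", length="short")))
```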

In Summary

While ChatGPT offers enormous potential and has already made a significant impact on productivity in many workplaces, its use in business settings is not without concerns. As a business owner, you simply have too much to account for on any given day to extensively supervise employees on all things AI. Yet continual oversight is needed to prevent inappropriate submission of company data and breaches of confidentiality, and policing the quality of AI content takes valuable time away from other important aspects of the daily running of your business.

Taking the complex workflows you are using ChatGPT for at present and transforming them into a suite of branded, purpose-built micro-applications is the best way to experience the productivity gains of AI without the hassles and ongoing reputational risks.
