(+44) 020 3445 6275
info@ricciarelli.eu
23 Av. René Coty, 75014 Paris (France)


9 March 2026
Julien Ricciarelli-Bonnal

Why Some Companies Still Ban ChatGPT for Their Employees

Since the explosion of generative artificial intelligence in 2023, tools such as ChatGPT, Gemini and Claude have quickly become part of professional conversations. In many companies, employees already use these technologies to draft texts, analyze information or prepare documents.

Yet despite the growing interest in these tools, some organizations have chosen a much more cautious approach. In several large companies and institutions, the use of ChatGPT is still prohibited or strictly regulated.

At first glance, this situation may seem paradoxical, especially at a time when artificial intelligence is often presented as one of the major productivity drivers of modern work.

The sensitive issue of confidential data

The main concern for companies relates to the management of sensitive data. Artificial intelligence tools operate by analyzing the information provided by users. When internal documents, business strategies or customer data are entered into these systems, some organizations worry that this information could be exposed or used in an uncontrolled way.

Even though technology companies state that, in certain professional versions, data is protected and conversations are not used to train models, the issue remains sensitive for many IT and legal departments.

In sectors that are particularly exposed, such as finance, healthcare or public institutions, data protection is an absolute priority. In this context, some companies prefer to temporarily prohibit the use of these tools while they assess the potential risks more carefully.

Responses that are not always reliable

Another concern relates to the reliability of the responses generated by artificial intelligence. Systems such as ChatGPT rely on statistical models capable of producing very convincing answers, but these responses can sometimes contain errors or approximations.

In personal use, these limitations are relatively easy to manage. In a professional context, however, incorrect information may have more significant consequences.

Some companies are particularly concerned that documents produced with the help of artificial intelligence could be used without sufficient verification. This could lead to the circulation of inaccurate information or decisions based on incomplete analysis.

For this reason, some organizations prefer to limit the use of these tools until their teams have been properly trained to work with them.

A technology evolving at great speed

One of the major challenges associated with artificial intelligence is the speed at which these technologies evolve. New versions of models are released regularly, each bringing more advanced capabilities.

For companies, this rapid evolution can make it difficult to establish clear and lasting internal policies. IT departments and security teams must constantly adapt their rules to keep pace with technological change.

Some organizations therefore choose a gradual approach. Instead of immediately allowing these tools across the entire company, they prefer to test certain applications in controlled environments.

In this context, some companies conduct a strategic audit to identify the most relevant uses of artificial intelligence and the potential risks associated with its adoption.

A question of corporate culture

Beyond technical aspects, the introduction of artificial intelligence into everyday work also raises cultural questions.

Some companies encourage experimentation and rapid adoption of new technologies. In these organizations, employees are invited to test artificial intelligence tools in order to identify the most useful applications.

Other companies adopt a more cautious approach, particularly when operating in highly regulated sectors or when dealing with sensitive information.

The way an organization approaches these technologies often depends on its internal culture, its level of digital maturity and the nature of its business activities.

Companies developing their own AI tools

Faced with concerns about data confidentiality, some large organizations are choosing a different path: developing their own internal artificial intelligence systems.

By using models installed on their own infrastructure, these companies can maintain full control over the data processed by AI tools.

This approach allows them to benefit from artificial intelligence while limiting the risks related to sensitive information.

However, building and maintaining such infrastructure requires significant resources, which means it is currently accessible mainly to large organizations.

Adoption will likely continue to grow

Despite these concerns, the use of artificial intelligence in companies continues to expand. Many professionals are gradually discovering how these tools can save them time on tasks such as drafting documents, analyzing information or preparing summaries.

In marketing and communication, these technologies can also facilitate certain stages of content production or support broader digital transformation projects, such as website creation, search engine optimization or trend analysis.

For companies, the challenge is therefore to find the right balance between technological innovation and risk management.

Toward clearer AI governance in the workplace

As artificial intelligence becomes more integrated into professional environments, many organizations are working on clearer internal policies.

These guidelines usually define which types of information may be shared with AI tools, the situations in which their use is allowed, and the best practices employees should follow.

Rather than permanently banning these technologies, the trend now seems to be moving toward a more structured framework for their use.

In the coming years, artificial intelligence will likely continue to integrate gradually into professional tools. Companies that manage to define clear and controlled uses will be better positioned to benefit from these technologies while limiting the risks associated with their adoption.
