
Should ChatGPT be banned at workplaces?

26 May 2023

Recently, a number of global companies restricted the use of Generative AI tools like ChatGPT in the workplace, citing risks of compromising official data and business interests. But do the risks of these tools outweigh their benefits?

Highlights

A blanket ban on the use of ChatGPT in workplaces may not be the right approach.
The cutting-edge AI tool harbours extraordinary capabilities and can greatly assist teams when used responsibly with appropriate caution and guidelines.
The biggest risk is using ChatGPT as a single source of truth for research and writing. If teams rely exclusively on ChatGPT then they risk producing inaccurate or sub-optimal work.

Goldman Sachs, Citigroup and Samsung had earlier banned the use of ChatGPT by their staff; a recent addition to this trend is Apple.

According to reports, Apple is worried that employees who use such programmes may accidentally expose confidential information. Apple has also reportedly told its employees not to use GitHub Copilot, a Microsoft-owned tool that automates the writing of software code.

It has also been reported that Amazon has directed its employees to use internal AI tools instead of ChatGPT.

The move by global IT majors clearly indicates that open access to AI-enabled LLM tools like ChatGPT poses serious risks to companies’ information security.

To delve deeper into the topic, ETHRWorld interacted with industry experts to understand various facets of the development.

Usability of ChatGPT at workplaces

In today's world, these cutting-edge AI tools harbour extraordinary capabilities and can greatly assist teams when used responsibly with appropriate caution and guidelines.

“But an outright ban is not a favourable approach. Similar concerns were raised years ago regarding the internet or cell phones, yet we now recognize the immense benefits they have brought us. The aim should not be to discourage any type of innovation, including exploring how AI can be leveraged for the benefit of our consumers; we just have to do it intelligently and securely,” said Abhishek Chauhan, a global BFSI and technology professional.

While sharing his view regarding the latest ban, Gautam Ghosh, an HR professional, opined, “I don't think ChatGPT or other Generative AI tools should be banned. AI is going to be a tool every role in the future will be working with. So, it's better to give access to employees to these tools. Not doing so might rob them of the chance to develop their skills for the future.”

Shivam Singla, Co-Founder & CEO, Leegality, also responded in a similar way. He added, “A blanket ban is not the right approach. ChatGPT and generative AI tools are incredibly powerful and can help teams tremendously.”

While sharing his industry experience, Singla said, “One of our team members, the other day, used ChatGPT and other generative AI tools to create a small Leegality game highlighting the pain of physical paperwork. Such a project would have taken months without AI.”

“To encourage a culture of innovation and experimentation, a prudent approach is to use Generative AI, along with having a robust governance mechanism in place,” said Pratik Maroo, VP - Products, Indegene.

Risks that employees using ChatGPT can pose

"The biggest risk is using ChatGPT as a single source of truth for research and writing. If teams rely exclusively on ChatGPT then they risk producing inaccurate or sub-optimal work. Earlier, there was a risk of confidential data being shared with ChatGPT. But now they've come up with a feature that prevents this. So, this risk is largely mitigated, according to Singla.

“ChatGPT is a great assistant, but it's not a replacement for the human element needed for work just yet. So, teams need to be careful not to place too much reliance on it,” said Singla.

Relying solely on ChatGPT for all data can entail risks such as inaccurate information, legal and compliance concerns, and security and data privacy vulnerabilities.

“Information shared with an AI platform could expose a company to the risk of leaking competitive or material non-public information, consumer or employee data or PII (Personally Identifiable Information), and information related to intellectual property,” said Chauhan.

Hidden potential

“ChatGPT has value for any software product company which deals with data, coding and communication. I don't see this as a risk, but as an opportunity,” said Singla.

Maroo of Indegene also shared similar insights. According to Maroo, “Generative AI, and arguably its most popular avatar today, ChatGPT, has the potential to revolutionise the life sciences and healthcare industries.”

Indegene recently crowdsourced ideas via an internal employee innovation contest on ChatGPT. Employees participated with great enthusiasm, and some of the submissions were outstanding, according to Maroo.

“A good number of these (submissions) have the potential to be piloted and scaled for specialised business scenarios. One of the winning entries was a smart chatbot to register patients into a health programme to avoid or reduce errors, delays and inefficiencies in the patient registration process,” said Maroo.

But, according to Chauhan, excessive reliance on AI generative tools can pose a significant threat across industries.

“Having worked in the BFSI sector for over a decade, I strongly believe that leaders must remain vigilant about the potential risks associated with these advanced technologies. Industries dealing with customers' confidential information should exercise additional caution. However, it is worth noting that these concerns are continuously monitored to proactively mitigate future risks,” said Chauhan.

Need for companies to develop guidelines

Companies need to have strict guidelines that prevent over-reliance on AI tools by employees. Organisations may also need to review their confidential information policies and ensure that their usage of ChatGPT does not violate them, according to Singla.

If well planned, the guidelines can ensure a responsible and effective utilisation of AI technology within the organisation. They can also address accuracy and quality control, legal and compliance considerations, ethical usage, security and data protection, as well as provide employee training and awareness.

“By establishing such guidelines, companies promote responsible AI usage, minimise risks, and maximise the benefits of AI tools for their employees and the overall organisation,” added Chauhan.

For more information, please visit:
https://hr.economictimes.indiatimes.com/news/workplace-4-0/should-chatgpt-be-banned-at-workplaces/100514695
