ChatGPT can be helpful with job tasks, but oversight is still required



If ChatGPT, the exciting new chatbot from OpenAI, wrote this story itself, it would say:

“As companies seek to streamline their operations and increase productivity, many are turning to AI tools like ChatGPT to help their employees complete tasks. But will employees really be able to rely on these AI programs to take on more and more responsibility, or will they eventually fail to meet expectations?”

It’s not great, but it’s not bad either, is it?

Employees are experimenting with ChatGPT for tasks like writing emails, generating code, and even completing year-end reviews. The bot draws on data from the internet, books, and Wikipedia to generate conversational responses. However, the technology is not perfect. Our testing found that it sometimes offers answers that are potentially plagiaristic, self-contradictory, factually incorrect, or grammatically flawed, all of which can cause problems at work.

Jacob Andreas, assistant professor of natural language processing at MIT’s Computer Science and Artificial Intelligence Lab, says ChatGPT is basically a predictive text system, similar to but better than the ones built into your phone’s text-messaging apps. While it often produces good-sounding responses, he said, the content can have problems.
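Andreas’s “predictive text” framing can be illustrated with a deliberately tiny sketch: a bigram model that, for each word, counts which words followed it in some training text and then predicts the most frequent follower. This is an oversimplification for illustration only (real systems like ChatGPT use large neural networks over tokens, not word counts), and the corpus and function names here are made up:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """For each word, count which words followed it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "my day is going well thanks for asking my day is going fine"
model = train_bigram_model(corpus)
print(predict_next(model, "day"))    # → "is" (the only word seen after "day")
print(predict_next(model, "zebra"))  # → None (never seen in training text)
```

The toy model captures the key idea: it has no understanding of meaning, only statistics of what tends to follow what, which is also why such systems can sound fluent while being factually wrong.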

“If you look at some of these really long texts generated by ChatGPT, it’s easy to see where it contradicts itself,” he said. “When you ask it to generate code, it’s mostly correct, but there are often errors.”
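The “mostly correct, but with errors” pattern Andreas describes often takes the form of subtle boundary bugs that look fine at a glance. A hypothetical example (written for illustration, not actual ChatGPT output): a list-chunking helper whose loop bound silently drops the final partial chunk, alongside a corrected version.

```python
def chunk_dropping_tail(items, size):
    # Plausible-looking but subtly wrong: the range stops before the last
    # partial chunk, so [1, 2, 3, 4, 5] with size 2 silently loses the 5.
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

def chunk(items, size):
    # Correct: iterate over every start offset, keeping the final partial chunk.
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk_dropping_tail([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]] -- tail lost
print(chunk([1, 2, 3, 4, 5], 2))                # [[1, 2], [3, 4], [5]]
```

Both versions pass a quick visual read, which is exactly why generated code needs review and tests before it ships.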

We wanted to know how well ChatGPT could handle daily office tasks. Here’s what we found after testing in five categories.

Responding to messages

We asked ChatGPT to respond to several different types of incoming messages.

In most cases, the AI produced relatively appropriate responses, though many were cause for concern. For example, it was repetitive when responding to a colleague who asked how my day was going on Slack: “@[Colleague] Thanks for asking! My day is going well, thanks for asking.”

When the bot wasn’t sure what or whom it was referring to, it often left placeholder phrases in brackets. It also assumed details that were not included in the prompt, resulting in some factually incorrect statements about my work.

In one case, it said it was unable to complete the task, claiming it “doesn’t have the ability to receive and respond to emails.” But when given a more general request, it produced a response.

Surprisingly, ChatGPT was able to generate sarcasm when asked to respond to a coworker who asked if Big Tech was doing a good job.

Brainstorming ideas

One of the ways people use generative AI is to come up with new ideas. However, experts warn that people who use ChatGPT for this at work should be careful.

“We don’t understand how much of it is plagiarism,” Andreas said.

The possibility of plagiarism was obvious when we asked ChatGPT to develop story ideas on my beat. One pitch in particular was for a story idea and angle that I had covered earlier. While it wasn’t clear whether the chatbot was drawing on my previous stories, on similar ones by others, or just generating an idea from other data on the internet, the fact was that the idea was not new.

“It sounds human-like, but the actual content and ideas are often well known,” said Hatim Rahman, an assistant professor at Northwestern University’s Kellogg School of Management who studies the impact of artificial intelligence on work. “They are not new insights.”

Another idea was outdated, suggesting a story that would be factually wrong today. ChatGPT says it has “limited information” about anything after 2021.

Providing more detail in the prompt led to more focused ideas. However, when I asked ChatGPT to write some “interesting” or “fun” headlines, the results were spooky and a bit ridiculous.

Navigating difficult conversations

Have you ever had a coworker who talks too loudly while you’re trying to work? Maybe your boss holds too many meetings, cutting into your focus time?

We tested ChatGPT to see if it could help navigate tough workplace situations like these. For the most part, it generated appropriate responses that could serve as good starting points for employees. However, they were often somewhat wordy, formulaic, and in one case a complete contradiction.

“These models don’t understand anything,” Rahman said. “The underlying technology looks at statistical correlations… So it will give you stereotypical answers.”

The layoff memo it produced could easily hold its own against, and in some cases outperform, the notices companies have sent out in recent years. The bot cited “the current economic environment and the impact of the pandemic” as the reason for the layoffs, conveyed that the company understands “how difficult this news can be for everyone,” suggested that the laid-off workers would have support and resources, and, as prompted, motivated the team by saying they would “come out of this stronger.”

In difficult conversations with co-workers, the bot greeted them, addressed the issue politely, softened the delivery by saying it understood the person’s intent, and ended the note with a request for feedback or further discussion.

But in one case, when asked to tell a colleague to keep his voice down on phone calls, it completely misunderstood the prompt.

Writing team updates

We also tested whether ChatGPT could generate team updates when we provided it with the key points that needed to be communicated.

Our initial tests once again yielded appropriate answers, though they were formulaic and somewhat monotonous. When we specified an “excited” tone, the language became more casual and included exclamation points. But each note sounded very similar even after changing the prompt.

“It’s both the sentence structure and, more so, the connection of ideas,” Rahman said. “It’s very logical and formulaic… it looks like a high school essay.”

As before, it made assumptions when it did not have the necessary information. Not knowing which pronouns to use for my colleague proved problematic, a mistake that could signal to colleagues that I didn’t write the note or that I don’t know my team members very well.

Writing self-assessments

Writing self-evaluation reports at the end of the year can cause fear and anxiety for some, resulting in a review that undersells their accomplishments.

Feeding ChatGPT clear achievements, including key data points, produced a glowing review of myself. The first attempt was problematic, because the initial prompt asked for a self-assessment for “Danielle Abril” rather than for “me.” This led to a third-person review that sounded like it came from Elmo on Sesame Street.

Changing the prompt to ask for a review for “me” and “my” achievements led to complimentary phrases such as “I’ve consistently demonstrated a strong ability,” “I’m always willing to go the extra mile,” “I’ve been a valuable member of the team,” and “I am proud of my contributions.” It also included a nod to the future: “I am confident that I will continue to make valuable contributions.”

Some of the highlights were a bit generic, but overall it was a glowing review that could serve as a good starting point. The bot gave similar results when asked to write a cover letter. However, ChatGPT had one big problem: it got my job title wrong.

So, has ChatGPT been helpful for common business tasks?

It helped, but sometimes its errors caused more work than doing the task manually would have.

ChatGPT served as a great starting point in most cases, providing useful language and initial ideas. But it also produced responses that were inaccurate, wordy, potentially plagiaristic, or factually wrong, and it sometimes miscommunicated key details.

“I can see it being useful… but only if the user is willing to check the output,” Andreas said. “Just taking what it wrote off the rails and emailing it to your co-workers isn’t good enough.”
