Will ChatGPT take your job? New program shows AI could be ‘competing’ for work: experts
The development and growing popularity of ChatGPT show that artificial intelligence (AI) has the potential to compete with humans for jobs one day, experts say.
ChatGPT, an AI text generator released to the public last November, has quickly garnered widespread attention for its ability to produce ideas for song lyrics, poems and scripts. It also remembers earlier exchanges in a conversation, so users can ask follow-up questions naturally.
Recent reports suggest ChatGPT has the potential to pass the U.S. Medical Licensing Exam and perhaps earn an MBA from an Ivy League business school. A U.S. politician last week even delivered a speech on the floor of the House of Representatives written by the AI bot.
These developments show the potential for AI not only to be integrated into the workforce, but even to oust humans from some jobs performed today, said Varun Mayya, CEO of software company Avalon Scenes.
“At some point, it’s going to be something that is competing with you in the white-collar workforce,” Mayya told Global News.
“I don’t think it’s limited to just white collar, though. I think eventually it’s going to be everything.”
ChatGPT may threaten ‘intellectual labour’
ChatGPT stands out for its ability to generate material at varying levels of expertise, from high school to university-level compositions, whereas most other online writing tools can only fix grammar, tone and clarity.
It was developed by OpenAI, a San Francisco-based startup that works closely with Microsoft.
ChatGPT can be practically anything the user makes of it — it can take on the role of a chef and provide recipes, make business plans for marketers, create press releases for public relations specialists or give advice like a therapist.
Several research papers recently examined the potential ChatGPT possesses. In a white paper titled “Would Chat GPT3 Get a Wharton MBA?,” University of Pennsylvania Prof. Christian Terwiesch gave it a final exam for an operations management course at the Wharton School of Business and found that the chatbot would have earned a B to B-.
In a separate pre-print study, led by doctors from medical startup Ansible Health, researchers found that ChatGPT “performed at or near the passing threshold for all three exams” needed to be licensed as a doctor in the U.S.
As for ChatGPT’s performance in the legal field, the chatbot earned a passing grade in the Evidence and Torts section of the Multistate Bar Exam, a December 2022 paper found.
“Technology hasn’t put everybody out of a job yet, but it does put some people out of a job, and this one’s really interesting because most of the job replacement that’s happened as a result of technological development has been manual labour — but this is intellectual labour that’s threatened here,” said Brett Caraway, associate professor with the Institute of Communication, Culture, Information and Technology at the University of Toronto, on Global News Radio 640 Toronto last week.
“This could be lawyers, accountants. … It is something new and it will be interesting to see just how disruptive and painful it is to employment and politics.”
On Jan. 25, ChatGPT was used by U.S. Rep. Jake Auchincloss to write a brief, two-paragraph speech on a bill that would create a U.S.-Israel artificial intelligence centre, the Associated Press reported. His staff said they believe it was the first time an AI-written speech was read in Congress.
Auchincloss said he prompted the system in part to “write 100 words to deliver on the floor of the House of Representatives” about the legislation. Auchincloss said he had to refine the prompt several times to produce the text he ultimately read.
Jobs that involve content creation, like speech writing, can be vulnerable to AI replacement, Mayya said.
But he believes that AI can be integrated into the blue-collar workforce, threatening those jobs as well.
“You’re going to be able to put this AI inside robotics,” Mayya said.
“You have the hardware, which is maybe a robot that’s walking around … and then you’ll have software like ChatGPT or some other models sitting inside that’s driving the robot. Eventually it’s going to come for everything. It’s just a matter of when.”
ChatGPT’s flaws prevent it from taking over jobs, for now
Despite its growing popularity, ChatGPT currently possesses several flaws that mean it won’t be replacing humans in the workforce right away, experts say.
The software isn’t accurate all the time. It can write plausible-sounding but incorrect or nonsensical answers, as its creator has acknowledged. ChatGPT is powered by generative artificial intelligence, which produces new content after being trained on vast amounts of data from various sources, including the Internet.
“Many people have called the Internet a bit of a dumpster fire, and certainly there is a lot of things on the Internet that you probably wouldn’t want to put into a technology that you trusted,” said Katrina Ingram, CEO of Ethically Aligned AI, a social enterprise committed to consulting and educating companies on artificial intelligence.
“ChatGPT has learned the lessons of the prior chat bots and it’s put in place some guardrails, but underneath all of that, there’s still a massive amount of training data that has racist, sexist, objectionable content baked into it because that is part of the Internet, and that is part of the training dataset that went into this product.”
When assessing ChatGPT’s performance in the Evidence and Torts section of the Multistate Bar Exam, the authors of the December 2022 paper said its overall accuracy across the test was only 50.3 per cent. Its score was still much higher than the 25 per cent baseline expected from random guessing, they noted.
In his research, Terwiesch found that ChatGPT ran into trouble with basic mathematics, making mistakes in Grade 6-level math that can be “massive in magnitude,” his paper said.
ChatGPT doesn’t know the difference yet between accurate and inaccurate information, but it can be taught, Mayya said.
“There is a human somewhere sitting down helping ChatGPT, saying, ‘Hey, you’re not supposed to say that,’” he said.
“My worry is that today, it looks sort of like a toy, slightly useful in a variety of jobs, but I feel like eventually it’s going to be really powerful.”
How can ChatGPT be integrated into society?
Andrew Piper, a professor in McGill University’s department of languages, literatures and cultures, told Global News that he doesn’t see AI like ChatGPT replacing jobs right away.
Rather, he views them as tools that can be integrated into the workforce.
“I see it getting integrated into particular processes or workflows and potentially allowing productivity to increase,” he said.
“The idea is that it can kind of enhance what you’re doing and get rid of potentially some of the sort of rote work that slows people down.”
For example, Ingram cited a friend, a copywriter, who has been using generative AI as part of their job. She said it has helped her friend come up with ideas for content creation.
If corporations do decide to use these tools in their work, they should be transparent and indicate to the user that AI was involved in the creation of that content, like ingredients on a food label, she explained.
However, she questions why AI would be used to automate forms of human expression, like writing.
“In the work that I’ve done, in looking at things that get automated, we tend to automate things that we don’t like to do. I find it really curious that we are deciding to automate creating art and writing,” she said.
“These are expressions of humanity that I think many people take pleasure in, and so I’m really wondering about that: Why do we want to automate those things in the first place?”
Businesses that develop these technologies need to think about the “social consequences” of their creations when they release them to the public, Piper said.
“This is a developing technology and we have to kind of learn about it, learn with it and study it as we develop policy,” he said.
“One thing that’s frustrating as an educator is this thing has just been thrown out there in the world and we’re all running around cleaning up the mess. … Corporations that are developing this technology, applying this technology, really need to think through what the consequences are of the implementation that they’re thinking about.”
— with files from Global News’ Kathryn Mannie and Irelyne Lavery