Is artificial intelligence actually useful in the real world? Is it worth paying extra for this technology?
One positive answer is supposed to come from customer service call centers, where AI has the potential to either replace or supplement legions of human employees handling questions from confused and sometimes grumpy consumers.
Earlier this year, startup Klarna said an AI assistant based on OpenAI's models was doing the equivalent work of 700 full-time customer-service agents. Last week, Microsoft CEO Satya Nadella cited what's happening in contact centers with its Dynamics software as an example of AI being deployed successfully.
The problem is that no one really wants their customer service questions handled by machines. Not even the young'uns.
That's according to research from Morgan Stanley, which has been closely tracking AI adoption this year.
Ask the interns
The investment bank surveys its interns from time to time, to get a gut-check on tech usage from younger people who will grow into tomorrow's big consumers.
The bank recently asked these interns about using AI-powered customer service agents. The results were not pretty. It's another warning to the tech industry about the potential limits of AI adoption in practical situations.
- The vast majority (93%) prefer to talk to a human when trying to resolve a query
- 10% said AI chatbots never solve their problems
- 75% said chatbots fail to solve their problem at least half of the time
Morgan Stanley's analysts noted that AI models should improve, helping machines to solve more customer-service questions and complaints. But they also highlighted another risk.
"In many cases, technology improvement in and of itself cannot force behavioral change that is generally slow and iterative — particularly emotionally-driven complaints or trust-centric conversations," they wrote in a note this week to investors.
This makes sense intuitively. When you have a problem, especially one involving something you paid real money for, you want to be heard by a human who feels your pain and is capable of fixing the issue asap, ideally by cutting through red tape and just getting it done.
The AI reality
The AI reality is nowhere near that at the moment. Take Klarna's AI customer service agents.
Software engineer Gergely Orosz tried this Klarna technology out by calling up with questions.
"Underwhelming," was his conclusion.
When he asked about something, the AI bots regurgitated information that was already available from Klarna.
Anything beyond that?
"I'm boom talking with a human agent," he wrote.
While those working in tech are well-versed in the possibilities of AI at this point, the reality is that many organizations have yet to realize its true potential in the workplace.
That’s according to the most recent Slack Workforce Index which established that office workers are spending 41% of their time on tasks that are “low value, repetitive or lack meaningful contribution to their core job functions.”
Meanwhile, the vast majority (81%) of those who have adopted AI tools to automate certain tasks have experienced an increase in productivity.
So, why the disconnect? One of the biggest barriers to the widespread adoption of AI is leaders taking a “wait and see” approach, says Aytekin Tank, CEO and founder of Jotform, an online form builder.
“Unsure of the extent to which AI will transform the workplace, some leaders are hesitant to shift their approach toward AI,” he offers. “You might remember when Snapchat first took social media by storm. Some businesses waited to see whether it was sticky enough. Ultimately, we discovered that the answer was ‘yes’, but concerning a specific demographic.
“AI, however, isn’t just a social media platform, and it’s not just impacting a limited demographic. It’s overhauling how work is done and requires more than just training sessions. It requires adopting a ‘systems thinking’ way of looking at your daily tasks. Within organizations, that kind of fundamental shift must come from the top-down.”
Don’t get left behind
For workers who fear they are being left behind and worry their current skillset will soon become obsolete because their current employer isn’t investing in the future (or their future), there are several courses of action to take.
For starters, why not do your own research and present your findings to your line manager or HR department about how adopting some simple AI tools can increase productivity and output?
Next, investigate ways you can upskill in your own time. There are several ways to start implementing AI productivity tools into your everyday work. For example, Otter.ai allows you to transcribe meetings in real-time, ideal for sifting through your notes and picking out the action points that need to be addressed afterward.
Similarly, Grammarly and Jasper AI are writing assistants that can help you write more efficient emails, or compile more succinct presentations, internal memos, or marketing collateral.
Or if your role involves project management, why not give Notion a try? It can autofill databases, summarize documents, and even assign tasks based on the data that has been entered.
Tank adds: “Employees will need to build some slack into their schedules. But once employees set their automation machines in motion, automating more and more of their daily tasks, they’ll recapture time for more meaningful tasks. And as I’ve seen with our employees at Jotform, when employees aren’t drowning in busy work, they have more time and mental space for creative work.”
Tank’s advice correlates with research conducted by GitHub which found that developers who started using its AI Copilot tools were able to complete tasks 55% faster.
The vast majority (88%) felt they were more productive, 96% said they were faster and therefore could get through more work, 59% found coding less frustrating, 87% found repetitive tasks less mentally taxing, and crucially, 74% had more time to focus on more rewarding tasks.
However, if you feel as though your AI upskilling endeavors are falling on deaf ears and your current employer isn’t receptive to change, it could be time to start reevaluating your employment options. The World Economic Forum predicts that automation could disrupt 85 million jobs globally in medium and large businesses across 15 industries and 26 economies as soon as 2025.
Additionally, executives interviewed by the IBM Institute for Business Value estimate that 40% of their workforce will need to reskill over the next 3 years to stay relevant.
The JavaScript Object Notation (JSON) file and data interchange format is an industry standard because it is both easily readable by humans and parsable by machines.
However, large language models (LLMs) notoriously struggle with JSON: they might hallucinate values, produce responses that only partially follow instructions, or return output that fails to parse at all. That often forces developers to lean on workarounds such as open-source tooling, elaborate prompting, or repeated requests to ensure their outputs are interoperable.
Now, OpenAI is helping ease these frustrations with the release of its Structured Outputs in the API. Released today, the functionality helps ensure that model-generated outputs match JSON Schemas. These schemas are critical because they describe the content, structure, types of data, and expected constraints in a given JSON document.
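To make the schema idea concrete, here is a small, purely illustrative sketch (the event fields and constraints are hypothetical, not drawn from OpenAI's announcement) of a JSON Schema expressed as a Python dict, checked against a sample document with the third-party jsonschema library:

```python
import jsonschema  # third-party: pip install jsonschema

# A hypothetical schema: it pins down the structure, field types,
# allowed enum values, and which keys are required.
event_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "date": {"type": "string"},
        "category": {"type": "string", "enum": ["meeting", "deadline", "social"]},
    },
    "required": ["name", "date", "category"],
    "additionalProperties": False,
}

document = {"name": "Team retro", "date": "2024-08-16", "category": "meeting"}

# Raises jsonschema.ValidationError if the document violates the schema.
jsonschema.validate(instance=document, schema=event_schema)
print("document conforms to the schema")
```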
OpenAI says it is the No. 1 feature developers have been asking for because it allows for consistency across various applications. OpenAI CEO Sam Altman posted on X today that the release is by “very popular demand.”
The company said that its evaluations of the new GPT-4o with Structured Outputs score a “perfect 100%” at following complex JSON schemas.
The new feature announcement comes on the heels of an eventful week at OpenAI: three key executives (John Schulman, Greg Brockman, and Peter Deng) each announced their departure, and Elon Musk is yet again suing the company, calling the betrayal of its AI mission “Shakespearian.”
Easily ensuring schema adherence
JSON is a text-based format for storing and exchanging data. It has become one of the most popular data formats among developers because it is simple, flexible, and compatible with various programming languages. OpenAI quickly met demand from developers when it released its JSON mode on its models at last year’s DevDay.
With Structured Outputs in the API, developers can constrain OpenAI models to match schemas. OpenAI says the feature also allows its models to better understand more complicated schemas.
“Structured Outputs is the evolution of JSON mode,” the company writes on its blog. “While both ensure valid JSON is produced, only Structured Outputs ensure schema adherence.” This means that developers “don’t need to worry about the model omitting a required key, or hallucinating an invalid enum value.” (An enum, or enumeration, is a fixed set of named constant values a field is allowed to take; pinning a field to an enum keeps code easier to read and maintain.)
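As a rough sketch of how that looks in practice, based on the interface OpenAI describes in its announcement (the model identifier, schema, and prompt below are illustrative assumptions), a developer passes the schema through the Chat Completions API's response_format parameter with strict mode enabled:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Same hypothetical event schema as above; "strict": True asks the API
# to constrain generation so the output always matches it.
event_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "date": {"type": "string"},
        "category": {"type": "string", "enum": ["meeting", "deadline", "social"]},
    },
    "required": ["name", "date", "category"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # assumed model identifier from the announcement
    messages=[
        {"role": "system", "content": "Extract the event from the user's message."},
        {"role": "user", "content": "Team retro next Friday at 3pm."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "event", "strict": True, "schema": event_schema},
    },
)

# The returned content is constrained to parse as JSON matching event_schema,
# so no required key is missing and no out-of-enum category value appears.
print(response.choices[0].message.content)
```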
Developers can also ask Structured Outputs to generate an answer in a step-by-step way, guiding the model toward the intended output. According to OpenAI, developers don’t need to validate or retry incorrectly formatted responses, and the feature allows for simpler prompting while surfacing explicit refusals when the model declines a request.
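The step-by-step pattern and refusal handling might look something like the following sketch, using the Pydantic-based parse helper in OpenAI's Python SDK (the tutor schema and prompt are again illustrative assumptions, not OpenAI's own example verbatim):

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# A schema that asks the model to show its work before the final answer.
class Step(BaseModel):
    explanation: str
    output: str

class MathReply(BaseModel):
    steps: list[Step]
    final_answer: str

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a math tutor. Solve step by step."},
        {"role": "user", "content": "How can I solve 8x + 7 = -23?"},
    ],
    response_format=MathReply,  # the SDK derives the JSON Schema from the Pydantic model
)

message = completion.choices[0].message
if message.refusal:
    # The model declined the request (e.g. on safety grounds); no parsed object exists.
    print(message.refusal)
else:
    reply = message.parsed  # a MathReply instance, already schema-validated
    for step in reply.steps:
        print(step.explanation, "->", step.output)
    print("answer:", reply.final_answer)
```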
“Safety is a top priority for OpenAI — the new Structured Outputs functionality will abide by our existing safety policies and will still allow the model to refuse an unsafe request,” the company writes.
Structured Outputs is available on GPT-4o mini, GPT-4o, and fine-tuned versions of these models. It can be used with the Chat Completions API, Assistants API, and Batch API, and it is also compatible with vision inputs.
OpenAI emphasizes that the new functionality “takes inspiration from excellent work from the open source community: namely, the outlines, jsonformer, instructor, guidance and lark libraries.”