As companies reorganize themselves around artificial intelligence, Edward Kim, co-founder and head of technology at Gusto, offers a contrarian view: he argues that shrinking the existing team and hiring a batch of specially trained AI engineers is the wrong approach.
Kim told TechCrunch that non-technical team members often understand customers' situations and points of confusion better than the average engineer, which makes them well placed to decide what features should be built into AI tools. Gusto's approach is to have non-technical members of its customer experience team write "recipes" that instruct its AI assistant, Gus (announced last month), on how to interact with customers.
Kim also said the company has found that "those who are not software engineers but have a bit of technical savvy can build truly powerful and game-changing AI applications," pointing to CoPilot, a customer experience tool rolled out to Gusto's CX team in June that is now used 2,000 to 3,000 times a day.
Kim said Gusto can effectively upskill many people inside the company so they can help build AI applications. This interview has been edited for clarity.
Gus is Gusto's primary customer-facing AI feature, and it folds in many of the point features the company built previously. In applications today, you're starting to see AI buttons everywhere: "press this button to do something with AI." Gus lets us remove all of those buttons. When we think Gus can do something valuable for you, it appears unobtrusively and asks, "Hey, can I help you write a job description?" It's a cleaner way to interact with AI.
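To make the "no AI buttons" idea concrete, here is a minimal sketch in Python of what a proactive-assist check might look like. Everything in it is illustrative, not Gusto's actual product code: the screen names, the field-count heuristic, and the suggestion text are assumptions used only to show the pattern of staying silent unless the assistant has something valuable to offer.

```python
# Hypothetical sketch of the "no AI buttons" pattern: instead of putting an
# AI button on every screen, the assistant watches context and offers help
# only when it thinks it can do something useful.

from typing import Optional


def suggest_assist(screen: str, fields_completed: int) -> Optional[str]:
    """Return a proactive offer for this screen, or None to stay out of the way."""
    # Illustrative rule: only offer help on the job-posting screen, and only
    # once the user has filled in the basic fields and may be stuck on prose.
    if screen == "create_job_posting" and fields_completed >= 2:
        return "Hey, can I help you write a job description?"
    return None


if __name__ == "__main__":
    print(suggest_assist("create_job_posting", 3))  # offers help
    print(suggest_assist("run_payroll", 0))         # stays silent (None)
```

The design choice this illustrates is the one Kim describes: the trigger logic, not the user, decides when AI shows up, so the interface stays clean.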
Kim believes software programming has never been accessible to most people: you have to learn to code, often over years of schooling. Machine learning has been even less accessible, because you have to be a highly skilled software engineer with data science skills who knows how to build artificial neural networks and so on. The biggest recent change is that the interfaces for building machine learning and AI applications have become accessible to almost anyone. In the past, we had to learn the computer's language, and go to school to do it; now computers are learning to understand humans. That might not sound like a big deal, but it makes building software applications far more accessible.
Gusto's experience bears this out: people who aren't software engineers but have a bit of technical savvy can build truly powerful, game-changing AI applications. The company is leaning on its support team to expand what Gus can do, and those team members don't need to know how to program. The interfaces they use let them do what software engineers have traditionally done, without learning to code. I could give an example if you'd like.
At Gusto, even people without a technical background, especially the customer support team, find ways to apply their domain expertise to building more powerful AI applications, in particular by enabling Gus to do more and more. When the support team sees a support ticket recur — meaning customers keep contacting us for help with the same thing — we have the support team write a recipe for Gus. Without any technical skills, they can teach Gus how to walk customers through solving the problem, and sometimes even how to take action on their behalf.
Gusto has built an internal tool in which you can write natural-language instructions for Gus, telling it how to handle these situations. The customer support team even has a way to instruct Gus to call a particular API to complete a task, all without writing code.
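Gusto has not published how this internal tool works, but the shape of the idea can be sketched in a few lines of Python. In the sketch below, the `Recipe` fields, the `resend_invite` action, the example email address, and the stubbed model call are all hypothetical stand-ins: the point is that the support agent contributes only plain language, while a thin wrapper written once by engineers decides which pre-approved API action the assistant is allowed to invoke.

```python
# Hypothetical sketch (not Gusto's internal tool): a support agent writes a
# plain-language "recipe"; the assistant uses it as its instructions and may
# invoke one whitelisted internal API action.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Recipe:
    """A plain-language instruction written by a non-engineer."""
    trigger: str          # the kind of ticket this recipe covers
    instructions: str     # how the assistant should guide the customer
    allowed_action: str   # name of one pre-approved API action, or ""


# Pre-approved actions the assistant may invoke (names are illustrative).
ACTIONS: Dict[str, Callable[[str], str]] = {
    "resend_invite": lambda employee: f"Invite re-sent to {employee}.",
}


def build_prompt(recipe: Recipe, ticket: str) -> str:
    """Combine the recipe and an incoming ticket into instructions for the model."""
    return (
        "You are Gus, a payroll support assistant.\n"
        f"Recipe from the support team: {recipe.instructions}\n"
        f"You may call the action '{recipe.allowed_action}' if it applies.\n"
        f"Customer ticket: {ticket}"
    )


def handle_ticket(recipe: Recipe, ticket: str) -> str:
    """Stubbed flow: in a real system the prompt would go to an LLM, which
    would decide whether to reply with guidance or request the allowed action."""
    prompt = build_prompt(recipe, ticket)
    # Stand-in for the model's decision: here we always take the allowed action.
    if recipe.allowed_action in ACTIONS:
        return ACTIONS[recipe.allowed_action]("new.hire@example.com")
    return "Here is how to fix this yourself: " + recipe.instructions


if __name__ == "__main__":
    recipe = Recipe(
        trigger="employee never received onboarding invite",
        instructions="Confirm the employee's email, then re-send the invite.",
        allowed_action="resend_invite",
    )
    print(handle_ticket(recipe, "My new hire never got their invite email."))
```

Under this framing, engineers maintain the small, audited set of callable actions, while domain experts keep extending the assistant's behavior simply by writing more recipes.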
There's a lot of talk along the lines of "we're going to eliminate all these jobs in this area, hire these AI experts, and pay them millions of dollars because they have this unique skill set." I just think that's the wrong approach, because the people who can actually move your AI applications forward are the ones with domain expertise, even if they don't have technical expertise. Gusto can upskill a lot of people here and help them build AI applications.