AI Expert Machine
Skills for Success
Cognition
User Research
Timeline
Product Roadmap
Define KPIs
A/B Testing
Notable Outcomes
5 sec response time
700% more efficient in information discovery
83% completion rate
The Problem
While scrolling through the Confluence pages for my product, the Global Notification Engine, both videos and notes, I felt lost in the microcosm that is Ingram Micro. How did the different teams connect to one another? What did the different acronyms mean? Which teams were involved in which platforms, and how did they contribute to the overall Ingram Micro architecture?
I noticed that as a company's size increases, so does the learning curve for interns and new associates. Information discovery becomes harder as there is more information to sift through.
The Solution
We built an AI expert machine that knows all things Ingram Micro. Think of it as ChatGPT, but grounded in Ingram Micro content that it can connect to context from the outside world. It acts as each user's personal assistant, gathering the information they need without leaking company data.
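Conceptually, the assistant answers from retrieved internal content rather than from the open web. Below is a minimal, hypothetical sketch of that grounding idea in Python; the helper names and the generate parameter are placeholders for whatever retrieval store and model were actually used, not the production implementation.

# Illustrative only: keep the model grounded in internal documentation so answers
# draw on Ingram Micro content and company data never needs to leave the prompt.

def build_grounded_prompt(question: str, internal_passages: list[str]) -> str:
    """Combine retrieved internal passages with the user's question."""
    context = "\n\n".join(internal_passages)
    return (
        "You are an assistant for Ingram Micro associates. Answer using only "
        "the internal documentation below. If the documentation does not "
        "contain the answer, say so.\n\n"
        f"Internal documentation:\n{context}\n\n"
        f"Question: {question}"
    )

def answer(question: str, internal_passages: list[str], generate) -> str:
    """generate is a placeholder for whichever LLM completion function is available."""
    return generate(build_grounded_prompt(question, internal_passages))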
The Vision and Roadmap
The vision is for the expert machine to sit at the center of all Ingram Micro teams. We would first train the model on Confluence and SharePoint data to support new users onboarding to Ingram Micro. Then we planned to integrate it with other company platforms so associates can find the information they need without setting up meetings with the respective stakeholders.
The Users
At first, the target users are those newly onboarding to Ingram Micro. The product's potential extends beyond a knowledge base, so after integration with other company platforms, the user base will expand to internal associates across all Ingram Micro teams.
The Method
I first created the product roadmap to outline the MVP and future production features. Overseeing a team of 8, I was involved in each step of the process, beginning with an internal company research survey in which we discovered that 65% of associates struggle with information discovery. Concurrently, to build our MVP, we gathered the Confluence documentation, technical and non-technical, for a single product. We used a RAG (retrieval-augmented generation) framework with multiple large language models acting as agents, so our model would not only hold a conversation but also "think" intelligently. We built the frontend in React so users could type and converse with the model. We found a completion rate of 85%, a response time of 5 seconds, and a 700% improvement in information discovery efficiency.
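As a rough illustration of the pipeline described above, the sketch below retrieves the most relevant documentation passages and lets a router model decide which specialist model should answer. The retrieval here is a simple word-overlap stand-in for a real vector store, and router, technical_llm, and general_llm are hypothetical placeholders; the team's actual RAG framework and agent design may differ.

# Hypothetical sketch of a RAG pipeline with a routing agent, not the team's actual code.
from dataclasses import dataclass

@dataclass
class Passage:
    title: str
    text: str

def retrieve(question: str, passages: list[Passage], top_k: int = 5) -> list[Passage]:
    """Rank passages by word overlap with the question (stand-in for a vector search)."""
    q_words = set(question.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(q_words & set(p.text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def answer(question: str, passages: list[Passage], router, technical_llm, general_llm) -> str:
    """Route the question to a specialist model and answer from retrieved context."""
    context = "\n\n".join(f"{p.title}:\n{p.text}" for p in retrieve(question, passages))
    # One agent classifies the question so the right model handles it.
    route = router(f"Is this question technical or general?\n{question}")
    model = technical_llm if "technical" in route.lower() else general_llm
    return model(
        "Answer using only the documentation below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

In a setup like this, the React frontend would only need to send the user's message to an endpoint wrapping a function like answer and render the reply it gets back.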
What I would have done differently
I would have liked to incorporate a metric comparing the cost of running the model with the efficiency gains for users, which would put a monetary value on the time users save. Because data access was limited, we could only use a single Confluence section; with broader access, we could have gathered more data so a wider array of users could test the product. The testing sample size was about 10 people because of hardware limitations. Given the appropriate access, we could have run the system in the cloud and bypassed those hardware constraints.
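For the cost-versus-time-saved metric mentioned above, a back-of-the-envelope calculation could look like the sketch below; every number in the example is a made-up placeholder, not a measurement from the project.

def net_value_per_query(cost_per_query: float,
                        minutes_saved_per_query: float,
                        hourly_rate: float) -> float:
    """Dollar value of associate time saved per query, minus the cost of running the model."""
    time_value = (minutes_saved_per_query / 60.0) * hourly_rate
    return time_value - cost_per_query

# Placeholder numbers: a $0.05 query that saves 10 minutes for a $40/hour associate
# nets roughly $6.62 of value.
print(net_value_per_query(cost_per_query=0.05, minutes_saved_per_query=10, hourly_rate=40))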