The Challenge
Having spent many years in I.T., I’ve seen my fair share of service desks, user self-help guides, and a plethora of methods for quickly and efficiently triaging end-user I.T. woes.
Great advancements have been made in the field of AI, natural language processing and UX design.
But these technologies cost a lot to purchase, are hard to implement by oneself, and often lack the accuracy, and the clear sense of target audience, needed to make them truly effective.
The Goal
I wanted to see if it was possible to build a self-service platform that is easy to use, accurate, performant, easy to deploy, economical, and can be fully understood by anyone with a modicum of technical knowledge.
Solution
Ollama comes to the rescue on this one straight away. I was able to build a customised LLM based on the llama3 model, with a web UI for administration and a tiny widget for end-user interactions.
The model retains all the knowledge of llama3 but is restricted to particular subjects. I added source data gleaned from a number of user guides for a range of products, and modified it so that the model could make better suggestions for user actions.
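As a rough sketch, this kind of subject restriction can be expressed in an Ollama Modelfile layered on top of llama3. The system prompt and parameter value below are illustrative assumptions, not the exact ones used in the project:

```
# Modelfile — build with: ollama create helpdesk -f Modelfile
FROM llama3

# Lower temperature keeps troubleshooting answers focused and repeatable
PARAMETER temperature 0.2

# Illustrative system prompt restricting the model to supported subjects
SYSTEM """
You are an I.T. self-service assistant. Only answer questions about the
supported products and common end-user issues. If a question falls outside
those subjects, say so and suggest raising a support ticket instead.
"""
```

Once created, the model runs locally via `ollama run helpdesk` or the Ollama HTTP API, which is what a web UI would typically call.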
Finally, I developed a method within the web UI that post-processes Ollama's output, replacing trigger words with URLs end users can click to reach third-party websites, knowledge articles, or pre-populated tickets in an incident management system that is also hosted locally.
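The trigger-word replacement step can be sketched in a few lines. The token format (`[[KB:1234]]`-style markers) and the URL templates below are assumptions made for illustration; the real implementation would use whatever convention the fine-tuned model was taught to emit:

```python
import re

# Hypothetical URL templates keyed by trigger kind (assumed hostnames)
TRIGGER_URLS = {
    "KB": "https://helpdesk.local/kb/{ref}",
    "TICKET": "https://helpdesk.local/tickets/new?template={ref}",
}

# Matches tokens such as [[KB:1234]] or [[TICKET:printer-jam]]
TRIGGER_RE = re.compile(r"\[\[(KB|TICKET):([^\]]+)\]\]")

def linkify(model_output: str) -> str:
    """Replace trigger tokens in the model's reply with clickable HTML links."""
    def _sub(match: re.Match) -> str:
        kind, ref = match.group(1), match.group(2)
        url = TRIGGER_URLS[kind].format(ref=ref)
        return f'<a href="{url}">{ref}</a>'
    return TRIGGER_RE.sub(_sub, model_output)
```

For example, `linkify("See [[KB:1234]] first.")` turns the marker into an anchor tag pointing at the knowledge article, which the widget can render directly.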
Conclusion
Whilst set-up is straightforward and the fine-tuning not too complex, there were some challenges around source data for particular products and the types of problems they cover.
It goes to show that an AI model can only be truly effective if good source data (such as a long-standing and accurate knowledge base) is available at the outset.
Thoroughly good fun nonetheless, and a project I would consider pushing into a dev environment in the real world but, perhaps, not a production one at this stage!