M5 Master Quizzer AI

Team

  • Dustin Reimann
  • Felix Huber
  • Michael Hurst

Supervision

Christian Kiefer

📋 Process

We managed our process with a Trello board, a helpful tool for organizing tasks and responsibilities. The Kanban board let us distribute tasks efficiently among team members and track the progress of each assignment, which facilitated clear communication and coordination throughout the project and ensured that everyone knew their roles and deadlines. We also held a meeting every Monday via Microsoft Teams, which helped us stay on track, keep the discussion going, and plan ahead for the week. Day-to-day communication and text chat took place on Discord.

Trello board

📅 Project Start

As the OpenAI API tools and AI in general were relatively new to us, we dedicated the first two weeks to understanding and researching what embeddings and fine-tuning do and how to improve our prompts. We then chose specialists for each major task: Dustin as our Flutter expert and principal developer; Felix responsible for fine-tuning, training our model, and creating an easy-to-use fine-tuning assistant; and Michael responsible for embeddings with semantic search functionality, figuring out ways to import and clean up documents for embeddings, and general project management to keep the project on track. Choosing specialists and having only three team members was a huge advantage: everybody knew their craft, could teach the others about their findings, and could work in a focused manner. It also made project management easier, since dividing tasks was straightforward, which led to clear communication and more precise timekeeping.

💢 Challenges

One of the biggest challenges was wrapping our heads around using an AI API as a new and exciting technology: first gathering an abstract and technical understanding, and then not draining our budget. We had to be careful when training and testing models, executing multiple prompts, and so on, not to empty our OpenAI wallet. Although we had an adequate budget, we managed not to waste millions of tokens through human error.
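One simple safeguard against surprise costs is to estimate token counts locally before sending a request. The snippet below is a minimal sketch in Python using the tiktoken library; the model name and the price constant are illustrative assumptions, not our actual billing figures.

```python
# Minimal sketch: estimate token usage (and a rough cost) before sending a prompt.
# The model name and PRICE_PER_1K_TOKENS are illustrative assumptions,
# not actual OpenAI pricing.
import tiktoken

PRICE_PER_1K_TOKENS = 0.002  # hypothetical value; check the current OpenAI pricing page

def estimate_cost(prompt: str, model: str = "gpt-3.5-turbo") -> float:
    encoding = tiktoken.encoding_for_model(model)
    n_tokens = len(encoding.encode(prompt))
    return n_tokens / 1000 * PRICE_PER_1K_TOKENS

prompt = "Generate five quiz questions about photosynthesis."
print(f"Estimated prompt cost: ${estimate_cost(prompt):.5f}")
```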

Another challenge was the import and preparation of factually correct fine-tuning training data, the necessary data transformations, and ensuring JSON compatibility on various ends. Importing documents to be used as an additional source of knowledge (besides the AI's built-in knowledge) also meant that we had to find a way to extract the relevant information. We managed this by having OpenAI clean up our text, removing redundant or irrelevant information such as page numbers, and then applying our own formatting before embedding the text for semantic search. Prompt engineering played a big role here.
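As an illustration of this pipeline, the sketch below (Python, OpenAI v1 client) asks the chat model to strip page numbers and irrelevant fragments from extracted text and then embeds the cleaned chunks for semantic search. The model names, prompt wording, and chunking are simplified assumptions, not our exact implementation.

```python
# Sketch of the document-preparation pipeline: clean up extracted text with the
# chat model, then embed the cleaned chunks for semantic search.
# Model names and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def clean_text(raw: str) -> str:
    """Ask the chat model to remove page numbers and other irrelevant fragments."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Clean up the following document text. Remove page numbers, "
                        "headers, footers and redundant fragments. Keep the factual content."},
            {"role": "user", "content": raw},
        ],
    )
    return response.choices[0].message.content

def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """Create one embedding vector per chunk for later semantic search."""
    response = client.embeddings.create(
        model="text-embedding-ada-002",
        input=chunks,
    )
    return [item.embedding for item in response.data]

raw_page = "Photosynthesis converts light energy into chemical energy. ... Page 12"
cleaned = clean_text(raw_page)
vectors = embed_chunks([cleaned])
print(len(vectors[0]))  # embedding dimensionality
```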

We also continually refined the UI to make the application more usable overall. The result is an easy-to-use question generator with advanced features such as support for user-supplied knowledge and a custom model built specifically for question generation, which improves the quality of the generated questions.
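To show how these pieces fit together, the sketch below retrieves the most relevant embedded chunk for a topic via cosine similarity and passes it as context to a fine-tuned question-generation model. The fine-tuned model ID is a placeholder and the retrieval is deliberately minimal; this is an assumption-laden sketch, not our production code.

```python
# Sketch: use semantic search over embedded chunks to supply user knowledge as
# context for a fine-tuned question-generation model.
# FINE_TUNED_MODEL is a placeholder, not our real model ID.
import numpy as np
from openai import OpenAI

client = OpenAI()
FINE_TUNED_MODEL = "ft:gpt-3.5-turbo:your-org:quizzer:xxxx"  # placeholder

def most_similar_chunk(query: str, chunks: list[str], vectors: list[list[float]]) -> str:
    """Return the chunk whose embedding is most similar (cosine) to the query."""
    q = client.embeddings.create(model="text-embedding-ada-002", input=query).data[0].embedding
    q = np.array(q)
    sims = [np.dot(q, np.array(v)) / (np.linalg.norm(q) * np.linalg.norm(v)) for v in vectors]
    return chunks[int(np.argmax(sims))]

def generate_questions(topic: str, chunks: list[str], vectors: list[list[float]]) -> str:
    """Generate quiz questions grounded in the retrieved user-supplied knowledge."""
    context = most_similar_chunk(topic, chunks, vectors)
    response = client.chat.completions.create(
        model=FINE_TUNED_MODEL,
        messages=[
            {"role": "system", "content": "Generate quiz questions from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nTopic: {topic}"},
        ],
    )
    return response.choices[0].message.content
```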

✅ Project Results

We realized the main requirements of the project tender and contributed our own features and ideas along the way. At the same time, we stayed in constant communication with Mobile Learning Labs to build a common vision. Extensive documentation was provided, which can be used by the customers of Mobile Learning Labs.