
Guru to Go: a speech-controlled meditation assistant and sentiment tracker

Here you can see a demo video of the voice-controlled meditation assistant that we built in the course “Conversational Agents and Speech Interfaces” (Course Description).

The central goal of the project was to make the assistant entirely speech controlled, so that you never need to touch your phone while immersed in meditation.

The chatbot was built in Google Dialogflow, a natural language understanding engine that interprets free-form text input and identifies the entities and intents within it. We wrote a custom Python backend that takes these evaluated intents and computes individualized responses.
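To give a feel for the pattern (a minimal sketch, not our exact code, which you can find in the report and the repo): a Dialogflow fulfillment webhook receives the matched intent and its parameters as JSON, and the backend replies with the text the agent should speak. The sketch below assumes Dialogflow’s v2 webhook format; the `StartMeditation` intent, its `topic` parameter, and the meditation texts are hypothetical placeholders.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical catalogue of meditations, keyed by topic.
MEDITATIONS = {
    "breathing": "Let's begin a five-minute breathing meditation ...",
    "body scan": "Settle in for a gentle body scan ...",
}

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    # Dialogflow v2 puts the matched intent and parameters in queryResult.
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"].get("parameters", {})

    if intent == "StartMeditation":  # hypothetical intent name
        topic = params.get("topic", "breathing")
        text = MEDITATIONS.get(topic, MEDITATIONS["breathing"])
    else:
        text = "Sorry, I didn't catch that."

    # fulfillmentText is what the agent speaks back to the user.
    return jsonify({"fulfillmentText": text})

if __name__ == "__main__":
    app.run(port=5000)
```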

The resulting application runs in Google Assistant and can adaptively deliver meditations, visualize sentiment history, and provide comprehensive information about meditation practices. Sadly, we relied on beta functionality from the older “Google Assistant” framework, which Google rebranded to “Actions on Google” a few months later; core functionality changed, requiring an extensive migration that neither Chris, my partner in this project, nor I found time to do.

Nevertheless, the chatbot worked as a meditation player and could store and graph each user’s recorded sentiments over time.
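As a rough illustration of that idea (a minimal sketch, not our actual storage code), each sentiment can be logged per user with a timestamp and rendered as a time series. The in-memory store and the -1 to +1 mood scale below are assumptions made for this example.

```python
from datetime import datetime
from collections import defaultdict
import matplotlib.pyplot as plt

# Hypothetical in-memory store: user_id -> list of (timestamp, score) pairs.
# Scores map reported moods onto a -1 (negative) .. +1 (positive) scale.
sentiment_log = defaultdict(list)

def record_sentiment(user_id: str, score: float) -> None:
    """Append a timestamped sentiment score for one user."""
    sentiment_log[user_id].append((datetime.now(), score))

def plot_history(user_id: str) -> None:
    """Render a user's sentiment history as a simple time series."""
    times, scores = zip(*sentiment_log[user_id])
    plt.plot(times, scores, marker="o")
    plt.ylim(-1, 1)
    plt.ylabel("sentiment")
    plt.title(f"Sentiment history for {user_id}")
    plt.savefig(f"{user_id}_history.png")

record_sentiment("demo-user", 0.4)
record_sentiment("demo-user", 0.7)
plot_history("demo-user")
```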

Below you can also find our final report with details on the implementation and our thought process.

Read the full report

View the project on GitHub

This was my first dip into Google’s framework for building a speech assistant, and I ran into many problems along the way, some of which found their way into the final report. I am now building on these explorations with Ällei, another chatbot with a different focus, which is not realized within Actions on Google but will instead get its own React app on a website.