A warm and sunny Friday evening in Munich: around 300 like-minded people gathered, drinking and chatting, brought together by their shared interest in new and exciting technologies and business opportunities at the Grand Event about Chatbots and Voice Assistants. I spoke about developing voice assistants (Alexa skills) in a professional way and want to share my perspective on the event, outline my talk, and provide its slides.
The event took place at Burda Bootcamp and was organized by the Technology Messenger Munich Meetup. Talks were held in two tracks: a business track, where startups and agencies shared their experiences in offering and distributing chatbots and voice assistants, and a technology track, which focused on how to actually build and develop them. My talk was in the technology track, so I stayed there.
It all started with a presentation by Patrick Blitz from ChallTell, who showed how to create an Alexa skill within minutes using predefined backend templates. Unfortunately, the Wi-Fi and audio setup at the venue had serious problems, and all talks that relied on an internet connection or on playing audio suffered from that.
The next talk was given by Boris Bokowski from Google. He presented his voice assistant, built with a Raspberry Pi, API.ai, Google Actions, and Firebase functions. Since we always try to stay independent of platform, service, and tool providers, we are interested in Google Actions anyway, and I was happy to see that the concepts are quite comparable. We will definitely play around with Google Actions, especially because we already use API.ai for some of our chatbot projects (stay tuned).
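To give an idea of how such a setup hangs together: API.ai recognizes the intent and calls a fulfillment webhook, which answers with speech and display text. The following is a minimal sketch of such a fulfillment handler, not code from the talk; the action name `greet.user` and the function names are illustrative assumptions, and the response shape follows the API.ai v1 webhook format (`speech`, `displayText`, `source`).

```typescript
// Hypothetical fulfillment logic for an API.ai webhook call.
// The request/response shapes mirror the API.ai v1 webhook format;
// action and parameter names are made up for illustration.

interface ApiAiRequest {
  result: {
    action: string;                      // intent action configured in API.ai
    parameters: Record<string, string>;  // slots extracted from the utterance
  };
}

interface ApiAiResponse {
  speech: string;       // spoken answer (e.g. on a Google Home device)
  displayText: string;  // text shown on chat surfaces
  source: string;       // identifies the webhook backend
}

function handleWebhook(req: ApiAiRequest): ApiAiResponse {
  const { action, parameters } = req.result;
  let answer: string;
  switch (action) {
    case "greet.user": // hypothetical action name
      answer = `Hello, ${parameters["name"] ?? "there"}!`;
      break;
    default:
      answer = "Sorry, I did not understand that.";
  }
  return { speech: answer, displayText: answer, source: "demo-webhook" };
}
```

In the setup described in the talk, a function like this would be wrapped in an HTTPS-triggered Firebase Cloud Function so API.ai can reach it as its fulfillment endpoint.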
Daniel Heinze, Tech Evangelist for the Microsoft Bot Framework, showed how easy it is to write a bot once and then publish it on several channels, such as Telegram, Facebook Messenger, and Skype. The interesting part was how seamlessly it integrates into Windows 10 and Cortana. Also worth a try, especially when your bot does not rely on features of one specific platform.
The most interesting talk that evening, given by Alexander Weidauer, was about rasa.ai. It is good to see open source alternatives at the level of the tech giants. Alexander first presented its state-of-the-art natural language understanding capabilities. Rasa is currently also working on a framework that is said to understand whole dialogs with the help of several layers of deep learning networks. We are curious!
Next up was my talk about voice assistants and how to develop them in a professional way. You can find the slides on SlideShare: