Aimybox is searching: a framework for building voice assistants needs an iOS wizard

Everyone around is talking about voice assistants, Alice, Google Assistant, what they can do and what they can't... So we went ahead and wrote a framework for building mobile voice assistants. Open source, too! So far we've only done it for Android, and now we're looking for a great iOS developer who can smoothly port the Kotlin code to Swift.



Below the cut, we'll explain why we're doing this at all, what we've built, and exactly who we're looking for to join the Aimybox team.







It so happens,



that we at Just AI have been building talking robots, voice assistants, and all kinds of chatbots for a long time. Under the hood we have our own NLP (natural language processing) technology, a whole platform, visual designers, and everything else that goes with it.



Meanwhile, the market for talking devices



and applications is growing and blooming! Not just Amazon, Google, and Yandex: dozens of companies, from small to large, are racing to create and launch their own voice assistant or device.



"What for? Who needs it? ”



users exclaim. And they're told: "Voice assistants are cool! Soon everyone will be talking to apps by voice!" And sure, it is convenient, as long as the assistant is smart, understands everything, and works fast. But look at this from the other side, and you'll see that...



Business needs one voice feature



Well, or a couple. But most importantly, a business needs a quick and easy way to add a voice assistant to an existing mobile application, and then customize it however it likes.



Here's how it looks in practice. There is a mobile application with lots of buttons and all sorts of other UI elements, say, a mobile banking app. What does the bank want? A microphone button appears in the app, the user taps it and says: "Transfer money to mom." Instead of three taps on the screen, just one. The app can then simply open one of its screens with the recipient field already filled in.



Or, "Where do they give out dollars?" And the application opens a card with ATMs of our bank, which can issue this business within a radius of a kilometer from the user.



What's so hard?



It seems simple: add a button to the app, hook up some speech recognition, a speech synthesizer, an NLP engine, build a nice GUI with a scrolling view that shows the recognition process (no worse than Google's), synchronize it all, test it, catch the bugs... and then realize it's not so simple after all.







So we thought



we could build a framework that hides all this complexity under the hood (and covers it with tests) and lets a developer quickly add a voice assistant with the features they need to an already working mobile application. We've been building voice applications long enough to know this craft inside out, and we know exactly where the pitfalls are.



What happened







Aimybox! An open, free, customizable SDK and a ready-made voice assistant that you can add to your mobile application the way you'd add an online chat widget to a website. It embodies all our experience in building speech solutions. At the same time, we don't tie the assistant to any specific recognition, synthesis, or NLP engine: you can use any engines in your assistant independently of each other, and Aimybox takes care of synchronizing their work. It also has a beautiful UI!
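To give a feel for that independence, here is a rough Kotlin sketch of wiring an assistant together from three interchangeable components: speech-to-text, text-to-speech, and a dialog (NLP) API. Class names and package paths follow the Android SDK's quickstart and may differ between versions, so treat it as a sketch rather than copy-paste code.

```kotlin
import android.content.Context
import java.util.UUID
// Aimybox packages depend on which engine modules you add in Gradle;
// the paths below may differ between SDK versions.
import com.justai.aimybox.Aimybox
import com.justai.aimybox.api.aimybox.AimyboxDialogApi
import com.justai.aimybox.core.Config
import com.justai.aimybox.speechkit.google.platform.GooglePlatformSpeechToText
import com.justai.aimybox.speechkit.google.platform.GooglePlatformTextToSpeech

fun createAimybox(context: Context): Aimybox {
    val unitId = UUID.randomUUID().toString()

    // Any STT/TTS engine modules can be swapped in here, independently of each other
    val speechToText = GooglePlatformSpeechToText(context)
    val textToSpeech = GooglePlatformTextToSpeech(context)

    // NLP backend: a project key for the Aimybox dialog API, or any other DialogApi implementation
    val dialogApi = AimyboxDialogApi("YOUR_API_KEY", unitId)

    // Aimybox keeps the three components' work in sync
    return Aimybox(Config.create(speechToText, textToSpeech, dialogApi))
}
```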



Here's what we said about Aimybox at Conversations, a conference on conversational AI:





Open source really matters here,



because third-party developers (those same banks) need full control over what they embed in their applications. It's still a voice interface; who knows what it's doing inside...



And what's inside?



Ready-made modules for various speech recognition and synthesis engines, NLP, and voice activation. A ready-made, customizable assistant GUI. Documentation and an example showing how easy it is to embed it all in your application. But so far this is Android only!
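As a rough illustration of what embedding looks like on Android: the app exposes its Aimybox instance through a provider interface, and the ready-made UI is added as a regular fragment. Again, this is a sketch; `AimyboxProvider` and `AimyboxAssistantFragment` follow the Android assistant module, and exact names may differ in the version you use.

```kotlin
import android.app.Application
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
// Names follow the Android assistant module and may differ between versions
import com.justai.aimybox.components.AimyboxAssistantFragment
import com.justai.aimybox.components.AimyboxProvider

// The application exposes a single Aimybox instance to the assistant UI
class MyApplication : Application(), AimyboxProvider {
    // createAimybox() is the factory function from the previous sketch
    override val aimybox by lazy { createAimybox(this) }
}

// The ready-made assistant GUI is a fragment placed into the activity layout
class MainActivity : AppCompatActivity(R.layout.activity_main) {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        supportFragmentManager.beginTransaction()
            .replace(R.id.assistant_container, AimyboxAssistantFragment())
            .commit()
    }
}
```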



So we're looking for an iOS wizard,



who is eager to join the project (not for free, of course) and port Kotlin to Swift. Someone ready to create the world's first open voice assistant for iOS, with code we won't be ashamed to publish on GitHub for the whole community to judge.



What if you're the one who can read beautiful Kotlin and write no less beautiful Swift? Write to che@just-ai.com. The Just AI team is waiting for you, along with the world of voice assistants, speech recognition and synthesis, NLP, and a great reason to add another cool project to your portfolio!









