Are you thinking about creating a speech bot-driven app for your business? Some of the guidelines around bot creation, as outlined at Microsoft Build by noted Swedish entrepreneur, podcaster, and Windows Platform Development MVP Jessica Engstrom, are common sense. For example, don't build a voice bot just because it's cool new technology, and make sure it fits your business model.
But there are plenty of scenarios where voice does fit. One argument is that the average person types 40 words per minute but speaks 150. Approximately 3,000 new bots are released per week on the Microsoft platform alone, and 95 percent of smartphone owners have tried a personal assistant.
It's not all smooth sailing, though. Engstrom mentioned Microsoft's own disastrous voice-plus-AI experiment, Tay, which the company had to pull in less than a day after the internet taught it to be racist. And she pointed to Burger King, whose commercial was designed to trigger Google Home devices; the assistant responded by reading aloud a Wikipedia page that pranksters had edited to say the Whopper contained cyanide.
When designing a voice assistant, you should limit the scope of possible answers, Engstrom said. Don't have it ask open-ended questions. Train the voice assistant to handle many ways of phrasing a question or command. Even write a full script of a conversation that makes sense for your bot. Finally, provide audio help, giving examples of what kind of things a user can say.
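Engstrom's advice, constrain the scope, accept many phrasings per command, and offer audio help listing what users can say, can be sketched as a small intent table. This is a minimal illustration only; the intent names and sample phrasings below are hypothetical and not taken from any Microsoft SDK.

```python
import re

# Hypothetical intents for a restaurant bot, each with several
# accepted phrasings (Engstrom: train for many ways of asking).
INTENTS = {
    "check_hours": [
        "what are your hours",
        "when are you open",
        "are you open today",
    ],
    "book_table": [
        "book a table",
        "make a reservation",
        "reserve a table for two",
    ],
}

# Audio help: list examples of what the user CAN say, rather than
# asking an open-ended question.
HELP_PROMPT = "You can say things like: " + "; ".join(
    phrases[0] for phrases in INTENTS.values()
)

def tokenize(text: str) -> set:
    """Lowercase and split into words, dropping punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def match_intent(utterance: str) -> str:
    """Pick the intent whose sample phrasing overlaps the utterance most.

    Falls back to 'help' when nothing overlaps well, keeping the bot's
    scope of possible answers deliberately limited.
    """
    words = tokenize(utterance)
    best_intent, best_score = "help", 0
    for intent, phrasings in INTENTS.items():
        for phrase in phrasings:
            overlap = len(words & tokenize(phrase))
            if overlap > best_score:
                best_intent, best_score = intent, overlap
    # Require at least two overlapping words before committing.
    return best_intent if best_score >= 2 else "help"

print(match_intent("When do you open?"))   # matches a check_hours phrasing
print(match_intent("sing me a song"))      # out of scope, falls back to help
```

A production bot would use a trained language-understanding model rather than word overlap, but the shape of the design, closed intent set, multiple phrasings, explicit help fallback, is the same.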
One of the big announcements at the Build Keynote was the ability to transcribe multiparty speech in meetings while keeping track of which speaker said what. In a separate session, Aarthy Longino, Principal Program Manager for Speech and Language at Microsoft, showed this working in a custom development interface.
At last year's Build, the biggest hit was a meeting "cone" that recognized participants and transcribed what each said. Now that cone, which also sports a 360-degree camera, is being tested by Microsoft customers in private preview. But there are other devices that anyone can get to test the transcription, including the Roobo Smart Audio Dev Kit, which was impressively demoed in the session.
You can find these Cognitive Services Speech Devices at aka.ms/sdsdk-get.
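The key feature of the meeting transcription shown at Build is that each utterance is attributed to a speaker. As a format illustration only, the `(speaker, start_sec, text)` segment shape below is a hypothetical stand-in, not the Speech SDK's actual output schema, speaker-attributed segments might be rendered into a readable transcript like this:

```python
# Sketch: render speaker-attributed transcript segments into a
# readable meeting transcript. The segment tuple format is an
# assumption for illustration, not the real SDK result type.

def render_transcript(segments):
    """Sort segments by time and group consecutive lines by speaker."""
    lines = []
    last_speaker = None
    for speaker, start_sec, text in sorted(segments, key=lambda s: s[1]):
        if speaker != last_speaker:
            lines.append(f"[{speaker}]")
            last_speaker = speaker
        minutes, seconds = divmod(int(start_sec), 60)
        lines.append(f"  {minutes:02d}:{seconds:02d}  {text}")
    return "\n".join(lines)

segments = [
    ("Speaker 2", 7.5, "Agreed, let's ship it."),
    ("Speaker 1", 0.0, "Shall we start?"),
    ("Speaker 1", 3.2, "First item is the demo."),
]
print(render_transcript(segments))
```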
On the other end of speech, and at least as impressive, is text to speech (TTS). Microsoft's Qinying Liao, a Principal Program Manager on Speech Services, showed advances such as the remarkably natural-sounding new Neural Voices, which sounded so smooth that attendees in the room voted for the synthetic voice over an actual human reader.
Currently, Neural Voices are only available for nine regional English dialects, but Japanese, Spanish, and Portuguese are in the works.
Another new capability is to add emotion to the TTS: a simple keyword in code can make the generated voice sound cheerful or empathetic. That works the other way, too. In fact, Microsoft's transcription technologies for call centers can detect when an interaction starts to go negative. The Speech Services will let businesses customize recognition and TTS using their own terminology in a new Custom Speech Portal. You can read about all the Azure Speech Services at this help page.
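The "simple keyword in code" corresponds to an element in the SSML markup that Azure TTS accepts. Here is a minimal sketch of building such a request body, assuming the documented `mstts:express-as` element; the specific voice name (`en-US-JennyNeural`) and style strings are assumptions, so check the service's current voice and style lists before relying on them.

```python
# Sketch: build an SSML payload asking Azure TTS for an expressive
# delivery via mstts:express-as. Voice name and styles are assumed
# examples; consult the Speech Service docs for supported values.

def build_ssml(text: str,
               voice: str = "en-US-JennyNeural",
               style: str = "cheerful") -> str:
    """Wrap text in SSML with a speaking-style hint for the neural voice."""
    return (
        '<speak version="1.0" '
        'xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}">{text}</mstts:express-as>'
        '</voice>'
        '</speak>'
    )

ssml = build_ssml("Thanks for calling! How can I help?")
print(ssml)
```

This string would then be sent to the synthesizer (for example via the Speech SDK's SSML-speaking call or the REST endpoint); swapping `style="cheerful"` for another supported style, such as an empathetic one, is the one-keyword change the session described.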