Wixie takes advantage of AI and advanced predictive algorithms to support and enhance a student's multimodal communication.
When students use voice dictation, a machine-learning speech model pulls from a range of data to turn what they say into actual English words (or Spanish words if your OS is set to Spanish). If a device is user-specific, dictation becomes even more accurate over time as the platform learns how that user speaks.
Wixie's animated, talking images use AI-powered lip-syncing to generate visemes, the mouth shapes that correspond to speech sounds, so a character's mouth movements match its narration.
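Wixie's lip-sync pipeline isn't documented publicly, but the core idea behind viseme generation is simple to sketch: each speech sound in the narration maps to a mouth shape. Here is a minimal Python illustration, assuming a phoneme sequence has already been produced by some speech-analysis step; the phoneme labels and viseme names are hypothetical, not Wixie's.

```python
# Illustrative only: map phonemes (speech sounds) to visemes (mouth shapes).
# The phoneme labels and viseme names below are hypothetical, not Wixie's.
PHONEME_TO_VISEME = {
    "AA": "open",        # as in "f-a-ther"
    "IY": "smile",       # as in "b-ee"
    "UW": "round",       # as in "b-oo-t"
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth-on-lip", "V": "teeth-on-lip",
}

def visemes_for(phonemes: list[str]) -> list[str]:
    """Return one mouth shape per phoneme, defaulting to a neutral pose."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# A narrated word like "bee" might arrive as the phonemes ["B", "IY"]:
print(visemes_for(["B", "IY"]))  # ['closed', 'smile']
```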
When young learners search for images, Wixie uses predictive algorithms to determine the best results based both on what was actually typed and on what the input was likely intended to mean. For example, searching "jrfe" will return giraffe images.
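Tech4Learning hasn't published how this matching works, but the general technique is fuzzy string matching: score how close the typed query is to each keyword in the image library and return the closest matches. A small sketch using Python's standard difflib module follows; the keyword list and the 0.5 similarity cutoff are illustrative values, not Wixie's.

```python
import difflib

# Hypothetical sample of keywords from an image library.
IMAGE_KEYWORDS = ["giraffe", "gorilla", "garden", "zebra", "fire truck"]

def best_matches(query: str, n: int = 3) -> list[str]:
    """Return the library keywords most similar to the typed query.

    cutoff=0.5 is an illustrative threshold: lower values tolerate
    more misspelling, higher values demand a closer match.
    """
    return difflib.get_close_matches(query.lower(), IMAGE_KEYWORDS, n=n, cutoff=0.5)

print(best_matches("jrfe"))  # ['giraffe']
```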
Background removal uses machine learning to identify the human figure in an image and make the rest of the image transparent, so the figure can be placed over other images.
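The specific model Wixie uses isn't documented, but the final step of any segmentation-based background removal looks roughly the same: take a per-pixel mask that marks where the person is, and use it as the image's alpha (transparency) channel. In the hedged sketch below, built with Pillow and NumPy, person_mask stands in for the output of whatever person-segmentation model is used; here it is just a hard-coded rectangle so the example runs on its own.

```python
import numpy as np
from PIL import Image

# Stand-in for a real photo: a 200x200 solid-color RGB image.
photo = Image.new("RGB", (200, 200), color=(120, 160, 200))

# Stand-in for a segmentation model's output: 1.0 where the person is,
# 0.0 elsewhere. A hard-coded rectangle keeps the example self-contained.
person_mask = np.zeros((200, 200), dtype=np.float32)
person_mask[40:180, 60:140] = 1.0

# Use the mask as an alpha channel: background pixels become transparent.
rgba = np.dstack([np.asarray(photo), (person_mask * 255).astype(np.uint8)])
cutout = Image.fromarray(rgba, mode="RGBA")
cutout.save("cutout.png")  # the cutout can now be layered over other images
```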
Wixie is a tool for students to demonstrate their learning by combining writing with images, voice narration, video, and more. Tech4Learning works hard to ensure that Wixie's AI features do not do a student's work for them; instead, they make multimodal communication accessible to more users, easier to achieve, and more effective at helping students share their ideas and learning.
There will likely be many more AI features in Wixie, but from our design and development perspective, they have to match Wixie's purpose: helping young learners build and create as they learn to read, write, and develop math foundations.