This project is an experiment that uses TensorFlow.js and your webcam to make an Amazon Echo respond to sign language. It began with a simple question:
"If voice is the future of computing what about those who cannot hear or speak?"
You can find the code for this demo on GitHub or watch a video of it in action.
Note: This demo runs entirely in your browser; no data is sent to any server.
Status: Not Ready
Terminal Word - A word that appears as the last word in your query.
e.g. If you intend to ask "What's the weather?" and "What's the time?", then "weather" and "time" are terminal words, whereas "what's" is not.
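The terminal-word idea can be sketched as a small check on the sequence of recognized words: once the latest word is a terminal word, the query is complete and can be sent on. This is a minimal, hypothetical sketch; the function name and the hard-coded word lists are illustrative assumptions, since the actual demo learns its vocabulary from your training.

```javascript
// Hypothetical word lists for illustration only; the real demo's
// vocabulary comes from the signs you train it on.
const terminalWords = new Set(["weather", "time"]);

// A query is considered complete when its most recent word is a
// terminal word, i.e. a word that can end a query.
function isQueryComplete(words) {
  if (words.length === 0) return false;
  return terminalWords.has(words[words.length - 1]);
}

// "What's the weather" ends on a terminal word, so it is complete.
console.log(isQueryComplete(["what's", "the", "weather"])); // true
// "What's the" is still waiting for a terminal word.
console.log(isQueryComplete(["what's", "the"])); // false
```

In the demo this kind of check would let the system know when to stop listening for further signs and hand the assembled query off for speech synthesis.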
Training: 2 words