This project is an experiment that uses TensorFlow.js and your webcam to make Amazon Echo respond to sign language. It began with a simple question:
"If voice is the future of computing, what about those who cannot hear or speak?"
You can find the code for this demo on GitHub or watch a video of it in action.
Note: This demo runs entirely in your browser; no data is sent to any server.
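To make the in-browser pipeline concrete, here is a minimal sketch of how such a demo could be wired up with TensorFlow.js. It assumes a MobileNet feature extractor paired with a KNN classifier trained on a few webcam frames per sign, and the browser's Web Speech API to voice the recognized sign aloud so a nearby Echo can hear it; the element id `webcam`, the helper `addExample`, and the confidence threshold are illustrative choices, not the project's exact code.

```js
// Minimal sketch (assumed pipeline): MobileNet embeddings + KNN classifier,
// running entirely in the browser on webcam frames.
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

async function run() {
  const video = document.getElementById('webcam');   // a <video autoplay> element
  const webcam = await tf.data.webcam(video);        // iterator over webcam frames
  const net = await mobilenet.load();                // pretrained feature extractor
  const classifier = knnClassifier.create();         // k-nearest-neighbour head

  // Training: call this a few times per sign while performing it on camera.
  async function addExample(label) {
    const img = await webcam.capture();
    classifier.addExample(net.infer(img, true), label);  // store the embedding
    img.dispose();
  }

  // Inference loop: classify each frame and speak the predicted sign.
  while (true) {
    if (classifier.getNumClasses() > 0) {
      const img = await webcam.capture();
      const result = await classifier.predictClass(net.infer(img, true));
      img.dispose();
      if (result.confidences[result.label] > 0.9) {
        // Speak the recognized word aloud so the Echo can pick it up.
        speechSynthesis.speak(new SpeechSynthesisUtterance(result.label));
      }
    }
    await tf.nextFrame();  // yield to the browser between frames
  }
}
```

Because every step (capture, embedding, classification, speech synthesis) runs in the browser, no video or audio ever needs to leave the machine, which is what makes the privacy note above possible.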