Computers can't actually read lips in real time, but Meta's neural network, AV-HuBERT, uses what's called self-supervised learning to get better at matching words to videos of people's lip movements. One outcome could be better video transcription software. Read more: https://zd.net/3GtYmPx