In the fall of 2019, Google tweaked its search algorithm.
The company knew that people tended to type their queries in “keyword-ese,” rather than phrasing them the way they would speak to another human, so its researchers developed a new technique that sought to glean meaning from whole phrases or sentences rather than individual keywords. With this change, when presented with a search like “brazil traveller to canada need visa,” Google can now spot the crucial word “to,” assess its context, and return only results about travelling from Brazil to Canada and not vice versa.
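A rough sense of what that change means can be had from a few lines of code. The sketch below is a minimal illustration, not Google’s production system: it uses the open-source HuggingFace `transformers` library and the public `bert-base-uncased` checkpoint (both conveniences assumed here, since Google fine-tunes its search models in ways the public release does not capture) to show that BERT gives the word “to” a different vector in two mirror-image queries, because the model reads the whole sentence around each word.

```python
# A minimal sketch, not Google's system: the open-source `transformers`
# library and the public "bert-base-uncased" checkpoint (assumptions of
# convenience) showing that BERT assigns the same word a different vector
# depending on the sentence around it.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def vector_for(sentence: str, word: str) -> torch.Tensor:
    """Return BERT's contextual embedding for `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]  # first occurrence of the word

# The same word "to" in two queries that mean opposite things:
a = vector_for("brazil traveller to canada need visa", "to")
b = vector_for("canada traveller to brazil need visa", "to")

# A keyword-based system sees identical bags of words; BERT does not.
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0
```

An older, keyword-driven system would treat the two queries as the same bag of words; the bidirectional reading of context is what lets the model tell them apart.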
Such advances often feel small or incremental. Who has not become a bit blasé about the steadily growing competence of Siri and her virtual peers? But Google’s development, which it dubbed Bidirectional Encoder Representations from Transformers, or BERT, marked a bigger step. “Everything changed afterwards,” says Jonathan Berant, a former postdoctoral researcher at Google who is now a professor at Tel Aviv University’s Blavatnik School of Computer Science. “And BERT is the model that started this revolution.”