Created by Mark Davies (BYU).
Funded by the US National Endowment for the Humanities (2001-2002, 2015-2017).
Part of the BYU corpus collection.
This is our interface for the
Google Books n-grams
data. It is based on 45 billion words in tens of millions of books from the
1800s-2000s. The n-grams data does not allow the full range of queries that a
normal corpus would, but you can still find the frequency of words and phrases
over time, as well as find the collocates of a given word (including the
collocates over time, to track semantic change). And because it is based on 45 billion words,
the data is extremely rich.
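As a rough illustration of how frequency-over-time queries like these can be answered, the raw Google Books n-grams files store one record per line in the form n-gram, year, match count, volume count (tab-separated); counts can then be aggregated by period. The sketch below is not the interface's own code, and the sample lines are hypothetical, made up purely for illustration:

```python
from collections import defaultdict

# Hypothetical sample in the raw n-grams file format:
#   ngram TAB year TAB match_count TAB volume_count
raw_lines = """\
gramophone\t1905\t120\t80
gramophone\t1906\t150\t95
gramophone\t1955\t40\t30
gramophone\t1956\t35\t28
"""

def counts_by_decade(lines):
    """Sum the match counts of each n-gram per decade."""
    totals = defaultdict(int)
    for line in lines.strip().splitlines():
        ngram, year, match_count, _volumes = line.split("\t")
        decade = (int(year) // 10) * 10
        totals[(ngram, decade)] += int(match_count)
    return dict(totals)

print(counts_by_decade(raw_lines))
# → {('gramophone', 1900): 270, ('gramophone', 1950): 75}
```

A real query would also normalize these counts against the total words published in each period, so that the growth of publishing itself does not masquerade as a rise in usage.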