Created by Mark Davies. Funded by the US National Endowment for the Humanities (2001-2002, 2015-2017).
This is our interface for the
Google Books n-grams
data. It is based on 45 billion words in tens of millions of books from the
1800s-2000s. The n-grams data does not allow the full range of queries that a
normal corpus would, but you can still find the frequency of words and phrases
over time, as well as find the collocates of a given word (including
collocates over time, to track semantic change). And because it is based on
45 billion words, it is an incredibly rich dataset.
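
For those who work with the raw Google Books n-grams downloads directly
rather than through this interface, a frequency-over-time query can be
approximated with a short script. The sketch below is a minimal Python
example, assuming the v2 (20120701) tab-separated 1-gram layout of
ngram, year, match_count, volume_count; the local filename is hypothetical.

    import csv
    from collections import defaultdict

    def counts_by_decade(path: str, word: str) -> dict[int, int]:
        """Sum match_count per decade for one case-sensitive 1-gram."""
        totals: defaultdict[int, int] = defaultdict(int)
        with open(path, encoding="utf-8") as f:
            for ngram, year, match_count, _volumes in csv.reader(f, delimiter="\t"):
                if ngram == word:
                    totals[(int(year) // 10) * 10] += int(match_count)
        return dict(sorted(totals.items()))

    if __name__ == "__main__":
        # Hypothetical shard of the 1-gram files (the one covering words starting with "g").
        for decade, count in counts_by_decade("googlebooks-eng-all-1gram-20120701-g", "goose").items():
            print(decade, count)

To turn raw counts into frequencies per million words, divide each decade's
total by the corresponding word count from the accompanying total-counts file.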