El Corpus del Español




Created by Mark Davies, BYU. Funded by the US National Endowment for the Humanities (2001-2002, 2015-2017).

    Corpus                       Size                Created   More info
  1 Genre / Historical           100 million words   2001      Info
  2 Web / Dialects               2 billion words     2016      Info
  3 NOW (2012-2019)              5.5 billion words   2018      Info
  4 Google Books n-grams (BYU)   45 billion words    2011      Info
  5 WordAndPhrase                Top 40,000 words    2017      Info

This is our interface for the Google Books n-grams data, based on 45 billion words in tens of millions of books from the 1800s to the 2000s. The n-grams data does not allow the full range of queries that a normal corpus would, but you can still find the frequency of words and phrases over time, as well as the collocates of a given word (including collocates over time, to see semantic change). And because it is based on 45 billion words, the data are extremely rich.
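The kind of collocate-over-time query described above can be sketched in a few lines of Python. The bigram counts below are made-up illustrative data, not drawn from the Google Books n-grams themselves; in the real corpus these queries run through the web interface over the full 45-billion-word dataset.

```python
from collections import Counter

# Hypothetical bigram frequency data, keyed by (word1, word2, decade).
# The real Google Books n-grams data has the same basic shape but is
# vastly larger and is queried through the corpus interface.
bigram_counts = {
    ("awful", "tragedy", 1850): 120,
    ("awful", "weather", 1850): 90,
    ("awful", "nice", 1950): 340,
    ("awful", "tragedy", 1950): 60,
}

def collocates_by_decade(node, counts):
    """Group the right-hand collocates of `node` by decade,
    so a shift in the top collocates signals semantic change."""
    by_decade = {}
    for (w1, w2, decade), n in counts.items():
        if w1 == node:
            by_decade.setdefault(decade, Counter())[w2] += n
    return by_decade

profile = collocates_by_decade("awful", bigram_counts)
print(profile[1850].most_common(1))  # → [('tragedy', 120)]
print(profile[1950].most_common(1))  # → [('nice', 340)]
```

With these toy counts, the top collocate of "awful" shifts from "tragedy" in the 1850s to "nice" in the 1950s, which is the sort of pattern the corpus's collocates-over-time view is designed to surface.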