The Extracted Features (EF) dataset contains informative characteristics of the text, at the page level, of public domain volumes in the HathiTrust Digital Library (HTDL). These number slightly more than 5 million volumes, representing about 38% of the HTDL's total digital content.
Texts from the HTDL corpus that are not in the public domain are not available for download, which limits the corpus's usefulness for research. However, a great deal of fruitful research, especially text mining, can be performed through non-consumptive reading using extracted features (features derived from the text) even when the full text itself is not available. To this end, the HathiTrust Research Center (HTRC) has begun making available a set of page-level features extracted from the HTDL's public domain volumes. These extracted features can serve as the basis for certain kinds of algorithmic analysis. For example, topic modeling works with bags of words (sets of tokens); since tokens and their frequencies are provided as features, the EF dataset enables a user to perform topic modeling, as sketched below.
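To make the bag-of-words idea concrete, here is a minimal Python sketch that turns one EF volume file into per-page bags of words suitable as topic-modeling input. It assumes a simplified, hypothetical schema in which each page carries a flat token-to-count map under a "tokens" field; the actual EF files nest counts further (for example, under part-of-speech tags), so adapt the inner loop to the schema of the files you download. The file name "example_volume.json" is also just an illustration.

```python
import json
from collections import Counter

def page_bags_of_words(path):
    """Yield one Counter (bag of words) per page of an EF volume file.

    Assumes a simplified EF-style layout: a top-level "features" object
    holding a "pages" list, each page with a body-level token->count map.
    The real EF schema nests counts under part-of-speech tags, so this
    inner loop would need a small adjustment for actual EF files.
    """
    with open(path, encoding="utf-8") as f:
        volume = json.load(f)
    for page in volume["features"]["pages"]:
        bag = Counter()
        # Hypothetical field name: a flat {token: count} map per page body.
        for token, count in page["body"]["tokens"].items():
            bag[token.lower()] += count
        yield bag

# Usage: collect the per-page bags and feed them to any bag-of-words
# topic model, e.g. by building a document-term matrix from the Counters.
# bags = list(page_bags_of_words("example_volume.json"))
```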
Worksets and the Extracted Features (EF) Dataset
Currently, the extracted features dataset is provided in connection with worksets. (If you are not familiar with HathiTrust worksets, you may want to review the HTRC Workset Builder tutorial available elsewhere in this wiki.)
Content of an EF Dataset
An EF data file for a volume is a JSON file consisting of volume-level metadata and of extracted feature data for each page in the volume. The volume-level metadata comprises both metadata about the volume itself and metadata about the extracted features.
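As a rough illustration of this two-part structure, the following sketch reads one EF volume file and separates the volume-level metadata from the per-page features. The field names used here ("metadata", "features", "pages", "id") follow the general shape described above but are assumptions for the purpose of the example; consult the EF documentation for the authoritative field list.

```python
import json

# Illustrative only: loads one EF volume file under an assumed schema.
with open("example_volume.json", encoding="utf-8") as f:
    ef = json.load(f)

volume_metadata = ef["metadata"]     # metadata about the volume itself
pages = ef["features"]["pages"]      # extracted feature data, one entry per page

print(f"{len(pages)} pages of features for volume {ef.get('id', '?')}")
```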
Metadata about the volume consists of the following pieces of data:
Metadata about the extracted features consists of the following pieces of data: