
Introduction

The Extracted Features (EF) dataset contains informative characteristics of the text, extracted at the page level, from public domain volumes in the HathiTrust Digital Library (HTDL). These number slightly more than 5 million volumes, representing about 38% of the HTDL's total digital content.

Rationale

Texts in the HTDL corpus that are not in the public domain are not available for download, which limits the corpus's usefulness for research. However, a great deal of fruitful research, especially in the form of text mining, can be performed on the basis of non-consumptive reading using extracted features (features derived from the text) even when the full text itself is not available. To this end, the HathiTrust Research Center (HTRC) has begun making available a set of page-level features extracted from the HTDL's public domain volumes. These extracted features can serve as the basis for certain kinds of algorithmic analysis. For example, topic modeling works with bags of words (sets of tokens); since tokens and their frequencies are provided as features, the EF dataset enables a user to perform topic modeling, as in the sketch below.
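As a concrete illustration, the short sketch below shows how per-page token counts of the kind the EF dataset provides could feed a topic model. The sample data is invented, and scikit-learn is just one possible tool; nothing here is prescribed by the EF dataset itself.

# A minimal topic-modeling sketch over bags of words, assuming token
# counts like those the EF dataset provides. The page data below is
# invented; in practice it would be read from the EF JSON files.
from sklearn.feature_extraction import DictVectorizer
from sklearn.decomposition import LatentDirichletAllocation

pages = [  # hypothetical token -> frequency maps, one per page
    {"whale": 12, "ship": 7, "sea": 9},
    {"ship": 4, "harbor": 3, "sea": 5},
    {"railroad": 8, "engine": 6, "steam": 4},
]

# Build a sparse page-by-token count matrix from the dictionaries.
vec = DictVectorizer()
counts = vec.fit_transform(pages)

# Fit a small LDA model and print the top tokens for each topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)
tokens = vec.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [tokens[j] for j in weights.argsort()[::-1][:3]]
    print(f"topic {i}: {', '.join(top)}")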


Worksets and the Extracted Features (EF) Dataset


Currently, the extracted features dataset is being provided in connection with worksets. (If you are not familiar with HathiTrust worksets, you may want to review the tutorial available elsewhere in this Wiki regarding the HTRC Workset Builder.)

The EF dataset for any HTRC workset can be retrieved as follows. A user first creates a workset (or chooses an existing one) in the HTRC Portal. The user then runs the EF rsync script generator algorithm (one of the algorithms provided at the HTRC Portal) with that workset. This produces a script that the user downloads and executes on his or her own machine. When executed, the script transfers the EF data files for that workset from the HTRC's server to the user's hard disk via rsync, a robust file synchronization/transfer utility. The result, for each volume in the selected workset, is two zipped files containing "basic" and "advanced" EF data in JSON (JavaScript Object Notation), a commonly used lightweight data-interchange format.
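Once the script has run, the transferred files can be loaded programmatically. The sketch below assumes bzip2-compressed JSON files with a .json.bz2 extension, which is how EF files have typically been distributed, and a hypothetical download directory; adjust both to match what the generated script actually fetched.

import bz2
import json
from pathlib import Path

# Walk a hypothetical download directory and parse each EF file.
# The directory name and the .json.bz2 extension are assumptions;
# the generated rsync script determines the actual layout on disk.
ef_dir = Path("ef-download")

volumes = []
for path in sorted(ef_dir.rglob("*.json.bz2")):
    with bz2.open(path, mode="rt", encoding="utf-8") as f:
        volumes.append(json.load(f))

print(f"loaded {len(volumes)} EF files")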


Content of an EF Dataset


An EF data file for a volume consists of volume-level metadata and of extracted-feature data for each page of the volume, all in JSON format. The volume-level metadata includes both metadata about the volume itself and metadata about the extracted features.

Metadata about the volume consists of the following pieces of data (an illustrative sketch follows the list):

1. schemaVersion: A version identifier for the format and structure of this metadata object.
2. dateCreated: The time this metadata object was processed.
3. title: Title of the given volume.
4. pubDate: The publication year.
5. language: Primary language of the given volume.
6. htBibUrl: HT Bibliographic API call for the volume.
7. handleUrl: The persistent identifier for the volume.
8. oclc: The array of OCLC number(s).
9. imprint: The publication place, publisher, and publication date of the given volume.
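Putting the fields together, a volume-metadata object might look like the following. Only the field names come from the schema above; every value is invented for illustration.

# Illustrative volume-level metadata, expressed here as a Python dict
# mirroring the JSON; all values are invented examples.
volume_metadata = {
    "schemaVersion": "1.0",
    "dateCreated": "2014-06-01T00:00:00Z",
    "title": "An Example Volume",
    "pubDate": "1884",
    "language": "eng",
    "htBibUrl": "http://catalog.hathitrust.org/api/volumes/full/htid/example.json",
    "handleUrl": "http://hdl.handle.net/2027/example",
    "oclc": ["00000000"],
    "imprint": "Boston : Example Press, 1884.",
}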

Metadata about the extracted features consists of the following pieces of data (again, a sketch follows the list):

1. schemaVersion: A version identifier for the format and structure of the feature data.
2. dateCreated: The time the batch of metadata was processed and recorded.
3. pageCount: The number of pages in the volume.
4. pages: An array of JSON objects, each representing a page of the volume.
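The metadata about the extracted features might look like this, with the per-page objects reduced to placeholders; only the field names come from the list above, and the contents of each page object are beyond the scope of this list.

# Illustrative metadata about the extracted features; field names follow
# the list above, and the page objects are placeholders only.
features_metadata = {
    "schemaVersion": "1.0",
    "dateCreated": "2014-06-01T00:00:00Z",
    "pageCount": 2,
    "pages": [
        {},  # one JSON object per page; per-page fields omitted here
        {},
    ],
}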