This documentation has been updated for the newest format of URLs for the Extracted Features dataset, intended for release in August 2016. This format no longer has basic and advanced features described in separate files. If you are looking for information on the earlier format, see revision 32 of this page.
The Extracted Features functionality (currently under beta release) is one of the ways in which users of HTRC's tools can perform non-consumptive analysis of texts in the HathiTrust Digital Library's corpus. This article explains how you can use the HTRC's workset functionality to download the EF files for a personalized collection. Currently, this functionality is available only for the HathiTrust Digital Library's public domain corpus, consisting of slightly fewer than 5 million volumes.
If you have not yet created a workset, the Portal & Workset Builder Tutorial for v3.0 includes information on creating an account and building a Workset.
While logged into the HTRC Portal with the account you created your workset with, click on the 'Algorithms' link near the top of the screen.
From the list of algorithms that shows up, click on EF_Rsync_Script_Generator. This 'algorithm' generates a script for downloading the feature data files that correspond to your workset.
Specify a job name of your choosing. You must also select a workset for the EF_Rsync_Script_Generator algorithm to run against: select the "Select a workset from my worksets" option and choose your desired workset. Your screen should now look like the figure below. At this point, click the 'Submit' button.
You can now see the status of the job, as shown below. The status of the job will initially show as "Staging". (Refresh the screen after some time and the status will change to "Queued".)
Eventually, the job's status will change to "Completed", and the screen, on refreshing, will look as follows. Click on the link representing the job name.
At this time, the screen should look like the following:
At the very bottom left of your browser window, you will see a message like the following. (The number you see within the parentheses may vary, depending on how many times you have executed this step before. If doing this step for the first time, there will be no parentheses.) Press the “Keep” button.
At this point, the script will be downloaded to your computer’s hard disk, and you will see the message at the bottom left of your browser window be replaced by just the name of the downloaded file:
Windows users please note: Before proceeding, Windows users will need to complete additional steps to prepare their machine to work with rsync. Please follow the directions here.
After you download the script, from the command line navigate to the directory where the script file is located. This directory will typically be called Downloads, though the location may be different depending on your machine and if you have moved the file. Here is an example:
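For example, on macOS or Linux the navigation step might look like the following (the Downloads path is an assumption; adjust it to wherever your browser saved the script):

```shell
# Change to the directory where the browser saved the script.
# ~/Downloads is the typical default location; yours may differ.
cd ~/Downloads
```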
Once you are in the directory where the file is located, you can list the file to verify that it downloaded successfully:
ls -l EF_Rsync.sh
Then run the file you downloaded. It is a shell script. When you run it, a feature data file for each volume in your workset will be transferred to your hard disk via the rsync utility.
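Running the script might look like the following. The filename EF_Rsync.sh matches the earlier example; the name of your downloaded script may differ.

```shell
# Execute the generated script with a POSIX shell.
# It invokes rsync to fetch one compressed feature file per volume.
sh EF_Rsync.sh
```

The script requires rsync to be installed and an open network connection to the HTRC data host; the transfer may take a while for large worksets.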
If your workset contained N volumes with HathiTrust volume IDs V1, V2, V3,... VN respectively, then executing the shell script as shown above will cause the following feature data files for the corresponding volumes to be transferred to your computer’s hard disk via rsync: V1.json.bz2, V2.json.bz2, V3.json.bz2, ..., VN.json.bz2
The workset in our example contained only one volume, Buch der Lieder by Heinrich Heine, with the HathiTrust volume ID mdp.39015012864743. The corresponding file is called mdp.39015012864743.json.bz2.
Because the feature data files are compressed, you may need to uncompress them into JSON-formatted text files, depending on your need. The compression used is bzip2. If you are using the files with the HTRC Feature Reader, the library will deal with the compression automatically.
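If you do want the plain JSON, the files can be decompressed with the standard bunzip2 utility. Using the example volume from above:

```shell
# Decompress the feature file into plain JSON.
# The -k flag keeps the original .bz2 file; omit it to replace the
# compressed file with the uncompressed one.
bunzip2 -k mdp.39015012864743.json.bz2
```

This produces mdp.39015012864743.json alongside the compressed original.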