Besides the standard search and file upload interfaces, K2 data may be retrieved in a
number of ways as described below. Most of these options are now possible
because catalogs, light curves (coming), Full Frame Images, and Target Pixel Files
are all stored online in publicly-accessible directories.
FTP and HTTP
Individual K2 data and catalog files may be downloaded via FTP or through your browser (HTTPS). For FTP, connect to archive.stsci.edu anonymously and cd to pub/k2;
list the available directories with ls. For HTTPS, point your browser at the same pub/k2 directory tree on archive.stsci.edu.
Example browser paths to light curves and target pixel files
are shown below, where XXXXYYZZZ is the EPIC ID and N is the campaign number.
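As a sketch of how such a path is assembled: the target pixel file directories group targets by the first four digits of the EPIC ID (padded with five zeros) and then by the next two digits (padded with three zeros), matching the directory used in the wget example later on this page. Verify the scheme against a real listing before relying on it.

```shell
# Build the public directory URL for a K2 target pixel file from an
# EPIC ID (XXXXYYZZZ) and campaign number (N). The grouping scheme
# (first four digits + 00000, then next two digits + 000) is inferred
# from the directory paths used elsewhere on this page.
epic=205248134    # the 9-digit EPIC ID
camp=2            # the campaign number
dir1="$(echo "$epic" | cut -c1-4)00000"   # -> 205200000
dir2="$(echo "$epic" | cut -c5-6)000"     # -> 48000
echo "https://archive.stsci.edu/pub/k2/target_pixel_files/c${camp}/${dir1}/${dir2}/"
```

Running this prints the directory you could then open in a browser or feed to wget.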
K2 tar files are created after the light curves are received for each campaign.
Each set contains that campaign's light curves in tarfiles no larger than 5 GB.
They can be downloaded through your browser.
The files are also available via anonymous FTP (connect to archive.stsci.edu and cd to /pub/k2/lightcurves/tarfiles).
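As a sketch (not an official recipe), wget can also mirror the tarfiles directory over HTTPS in one command; the flags follow the wget example later on this page, and the recursion depth assumes the tarfiles sit directly inside that directory.

```shell
# Mirror the K2 light-curve tarfiles directory over HTTPS.
# -np never ascends above the directory, -c resumes interrupted
# (multi-GB) downloads, -N skips files you already have.
# -l1 assumes the tarfiles sit directly inside this directory.
base="https://archive.stsci.edu/pub/k2/lightcurves/tarfiles/"
echo "$base"   # sanity-check the URL before starting a large download
wget -q -nH --cut-dirs=4 -r -l1 -c -N -np -R 'index*' -erobots=off "$base"
```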
Currently the only downloadable K2 catalog is a CSV version of EPIC.
See the EPIC entry on the search & retrieve
page for more information.
WGET and CURL Scripts
If your system supports Wget or Curl, there are several other options for retrieving
data; each creates a shell script on your desktop computer.
You may then run the script from the command line to copy the requested
files directly to your computer. One advantage of using shell scripts
is that large requests are submitted one file at a time, avoiding memory issues
on the MAST servers.
Note that these scripts are primarily intended for Linux,
Unix, and Mac users, though alternatives may exist for Windows users.
Also note that Macs no longer ship with wget installed as of OS X 10.9. If you have installed wget from an external
source such as Fink, that version may not work with our systems; the most current version does seem to work.
We currently offer two methods for generating shell scripts of CURL or WGET
commands. Either method creates a script file on your desktop computer
that can be run to download the found files (e.g., using the sh command).
We also give examples of how to create your own WGET commands.
If you know what data you want, a quick way to create shell scripts is
to use one of our
available IDL or Python programs. These programs
accept several parameters for specifying ID numbers, cadence, dates, quarters,
data type, and command type. For example (assuming IDL is installed on your desktop),
return all available long-cadence target pixel files for EPIC ID 205248134:
get_k2, '205248134', data_type="target_pixel_file".
A python example to retrieve TPF files for ID 205248134:
python get_k2.py '205248134' -t target_pixel_file.
Type python get_k2.py -h to see all the available python arguments.
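Since the Python script is run from the command line, it is easy to drive it over several targets at once. A minimal sketch, reusing only the invocation shown above (the EPIC IDs are the ones used as examples on this page, and get_k2.py is assumed to be in the current directory):

```shell
# Retrieve target pixel files for several EPIC IDs in one pass,
# reusing the get_k2.py invocation shown above.
for epic in 205248134 202483641; do
    python get_k2.py "$epic" -t target_pixel_file
done
```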
Here are some examples of creating your own wget commands
where, instead of retrieving one file per command (as above), you
retrieve entire directories:
Download a whole directory of data using WGET
wget -q -nH --cut-dirs=6 -r -l0 -c -N -np -R 'index*' -erobots=off https://archive.stsci.edu/pub/k2/target_pixel_files/c2/205200000/48000/
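curl has no recursive mode comparable to wget -r, so with curl you fetch files one at a time from the same directory. A sketch, using the directory from the wget example above; the filename below is hypothetical, so take real names from the directory listing in your browser:

```shell
# curl cannot recurse through a directory the way wget -r does, so
# fetch individual files instead. The filename here is hypothetical --
# substitute a real one from the directory listing.
dir="https://archive.stsci.edu/pub/k2/target_pixel_files/c2/205200000/48000/"
echo "$dir"   # the directory from the wget example above
curl -f -O "${dir}ktwo205248134-c02_lpd-targ.fits.gz"
```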
If you know exactly which datasets you want, another method for retrieving data
(and bypassing the search step)
is to use the dataset retrieval page.
The list of datasets can be entered with a space or a comma delimiter,
or as an uploaded file,
but each dataset must be specified with both the EPIC ID and the campaign number of the
observation, in a form like KTWO202483641-C02
(or with a cadence type, like KTWO202483641-C02;LC).
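When building a long list of dataset identifiers, the format above is easy to generate from an EPIC ID and campaign number. A minimal sketch:

```shell
# Format a dataset identifier (KTWO<EPIC ID>-C<campaign>) for the
# retrieval page; append a cadence type such as ';LC' if needed.
epic=202483641
camp=2
printf 'KTWO%s-C%02d\n' "$epic" "$camp"   # -> KTWO202483641-C02
```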