
01-Work with STAC Catalogs on the dCache Storage


This example includes two Jupyter notebooks that illustrate how to:

* search for scenes from the Sentinel-2 mission, available as an open dataset on AWS (a sketch of this step follows the list);
* save the metadata in the form of a SpatioTemporal Asset Catalog (STAC) on the SURF dCache storage system;
* retrieve some of the scenes' assets;
* perform some simple processing on the retrieved assets, using a Dask cluster to distribute the workload.
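As a rough sketch of the search step, the snippet below queries a STAC API with pystac-client (assuming a recent version, where `search.items()` is available). The Earth Search endpoint, collection name, bounding box, and date range are illustrative assumptions; writing the resulting catalog to dCache and the Dask-based processing are carried out in the notebooks themselves.

```python
from pystac_client import Client

# Open the Earth Search STAC API, which indexes the Sentinel-2
# open dataset on AWS (endpoint assumed for illustration).
api = Client.open("https://earth-search.aws.element84.com/v1")

# Search the Sentinel-2 Level-2A collection over an example
# area of interest and time range.
search = api.search(
    collections=["sentinel-2-l2a"],
    bbox=[4.0, 52.0, 5.0, 53.0],
    datetime="2021-06-01/2021-06-30",
    max_items=10,
)

items = list(search.items())
print(f"Found {len(items)} scenes")
```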

Additional dependencies

The example requires the packages listed in the conda environment file provided in this folder.

02-Filter large AHN3 point-cloud files


This example includes a Jupyter notebook that illustrates how to filter/sample large LAZ files from the Actueel Hoogtebestand Nederland (AHN) version 3 dataset, downloaded to the SURF dCache storage.
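The notebook may use different tooling, but as a minimal sketch of the general pattern, the snippet below streams a large LAZ file in chunks with laspy and keeps only ground-classified points (ASPRS class 2). The file names are placeholders, and a LAZ backend such as lazrs must be installed for laspy to read compressed files.

```python
import laspy

# Stream the input file chunk by chunk to avoid loading
# the whole point cloud into memory.
with laspy.open("ahn3_tile.laz") as reader:
    with laspy.open("ahn3_tile_ground.laz", mode="w", header=reader.header) as writer:
        for points in reader.chunk_iterator(2_000_000):
            # Boolean mask selecting ground points in this chunk.
            ground = points[points.classification == 2]
            writer.write_points(ground)
```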

Additional dependencies

The example requires the packages listed in the conda environment file provided in this folder.

03-Phenology use-case for RS-DAT


This use case has moved to a dedicated repository: https://github.com/RS-DAT/Phenology

04-Machine learning use-case for RS-DAT

This example includes two notebooks for running a machine learning prediction problem on two datasets of different sizes. Depending on the infrastructure (e.g. SURF Research Cloud or Snellius), the I/O paths and the way files are read and written can differ.

The large dataset is only available on Snellius, at /gpfs/work2/0/ttse0619/DAT_data, whereas the small dataset is stored on dCache.
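One way to handle the differing I/O paths is to resolve the dataset location at runtime. The helper below is hypothetical (it is not part of the notebooks), and the dCache URL is a placeholder; only the Snellius path comes from this document.

```python
import os

def data_root() -> str:
    """Return the dataset root for the current infrastructure (hypothetical helper)."""
    if os.path.isdir("/gpfs/work2/0/ttse0619/DAT_data"):
        # Large dataset: only available on the Snellius file system.
        return "/gpfs/work2/0/ttse0619/DAT_data"
    # Small dataset: stored on dCache, e.g. accessed through an
    # fsspec-compatible file system such as dcachefs.
    return "dcache://..."  # placeholder URL
```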